2023-07-21 00:14:03,025 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09 2023-07-21 00:14:03,045 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-21 00:14:03,069 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 00:14:03,070 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48, deleteOnExit=true 2023-07-21 00:14:03,070 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 00:14:03,071 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/test.cache.data in system properties and HBase conf 2023-07-21 00:14:03,071 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 00:14:03,072 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir in system properties and HBase conf 2023-07-21 00:14:03,072 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 00:14:03,073 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 00:14:03,073 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 00:14:03,190 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 00:14:03,598 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 00:14:03,604 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:03,604 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:03,605 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 00:14:03,605 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:03,606 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 00:14:03,606 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 00:14:03,607 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:03,607 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:03,608 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 00:14:03,608 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/nfs.dump.dir in system properties and HBase conf 2023-07-21 00:14:03,609 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir in system properties and HBase conf 2023-07-21 00:14:03,609 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:03,609 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 00:14:03,610 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 00:14:04,369 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:04,374 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:04,689 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 00:14:04,880 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 00:14:04,896 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:04,936 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:04,971 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/Jetty_localhost_37391_hdfs____.cwygfj/webapp 2023-07-21 00:14:05,118 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37391 2023-07-21 00:14:05,128 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:05,128 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:05,633 WARN [Listener at localhost/36751] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:05,731 WARN [Listener at localhost/36751] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:05,752 WARN [Listener at localhost/36751] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:05,759 INFO [Listener at localhost/36751] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:05,764 INFO [Listener at localhost/36751] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/Jetty_localhost_43611_datanode____.hvetuq/webapp 2023-07-21 00:14:05,870 INFO [Listener at localhost/36751] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43611 2023-07-21 00:14:06,507 WARN [Listener at localhost/43491] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:06,606 WARN [Listener at localhost/43491] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:06,612 WARN [Listener at localhost/43491] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:06,614 INFO [Listener at localhost/43491] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:06,625 INFO [Listener at localhost/43491] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/Jetty_localhost_33775_datanode____.dv6nxg/webapp 2023-07-21 00:14:06,763 INFO [Listener at localhost/43491] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33775 2023-07-21 00:14:06,791 WARN [Listener at localhost/40345] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:06,930 WARN [Listener at localhost/40345] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:06,934 WARN [Listener at localhost/40345] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:06,936 INFO [Listener at localhost/40345] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:06,944 INFO [Listener at localhost/40345] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/Jetty_localhost_45253_datanode____zgf8gm/webapp 2023-07-21 00:14:07,078 INFO [Listener at localhost/40345] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45253 2023-07-21 00:14:07,106 WARN [Listener at localhost/41495] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:07,207 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5924be1706a57e9d: Processing first storage report for DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca from datanode a899477f-5274-4fd3-8839-f328942b70f1 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5924be1706a57e9d: from storage DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca node DatanodeRegistration(127.0.0.1:39689, datanodeUuid=a899477f-5274-4fd3-8839-f328942b70f1, infoPort=34121, 
infoSecurePort=0, ipcPort=40345, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x465aa41103715c4: Processing first storage report for DS-6921345a-75ee-43a8-af20-66801e0c34f1 from datanode 80b2fc5a-5cc3-4abc-8604-37c287e2d8f2 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x465aa41103715c4: from storage DS-6921345a-75ee-43a8-af20-66801e0c34f1 node DatanodeRegistration(127.0.0.1:34811, datanodeUuid=80b2fc5a-5cc3-4abc-8604-37c287e2d8f2, infoPort=45291, infoSecurePort=0, ipcPort=43491, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5924be1706a57e9d: Processing first storage report for DS-afb13c95-27d7-4357-a11c-a7a896282088 from datanode a899477f-5274-4fd3-8839-f328942b70f1 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5924be1706a57e9d: from storage DS-afb13c95-27d7-4357-a11c-a7a896282088 node DatanodeRegistration(127.0.0.1:39689, datanodeUuid=a899477f-5274-4fd3-8839-f328942b70f1, infoPort=34121, infoSecurePort=0, ipcPort=40345, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x465aa41103715c4: Processing first storage report for DS-29b3f70f-7b56-410e-879c-293a9e9f4c1f from datanode 80b2fc5a-5cc3-4abc-8604-37c287e2d8f2 2023-07-21 00:14:07,209 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x465aa41103715c4: from storage DS-29b3f70f-7b56-410e-879c-293a9e9f4c1f node DatanodeRegistration(127.0.0.1:34811, datanodeUuid=80b2fc5a-5cc3-4abc-8604-37c287e2d8f2, infoPort=45291, infoSecurePort=0, ipcPort=43491, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,256 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f172d0ff5b01b9e: Processing first storage report for DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa from datanode fbe2f6f3-31c9-46ac-bb54-bf23bf6d370f 2023-07-21 00:14:07,256 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f172d0ff5b01b9e: from storage DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa node DatanodeRegistration(127.0.0.1:42719, datanodeUuid=fbe2f6f3-31c9-46ac-bb54-bf23bf6d370f, infoPort=33259, infoSecurePort=0, ipcPort=41495, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,257 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f172d0ff5b01b9e: Processing first storage report for DS-b414f303-9530-45df-92d5-b55558ec293d from datanode fbe2f6f3-31c9-46ac-bb54-bf23bf6d370f 2023-07-21 00:14:07,257 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f172d0ff5b01b9e: from storage 
DS-b414f303-9530-45df-92d5-b55558ec293d node DatanodeRegistration(127.0.0.1:42719, datanodeUuid=fbe2f6f3-31c9-46ac-bb54-bf23bf6d370f, infoPort=33259, infoSecurePort=0, ipcPort=41495, storageInfo=lv=-57;cid=testClusterID;nsid=1731667808;c=1689898444457), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:07,531 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09 2023-07-21 00:14:07,624 INFO [Listener at localhost/41495] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/zookeeper_0, clientPort=60276, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 00:14:07,641 INFO [Listener at localhost/41495] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60276 2023-07-21 00:14:07,651 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:07,654 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:08,331 INFO [Listener at localhost/41495] util.FSUtils(471): Created version file at hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 with version=8 2023-07-21 00:14:08,331 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/hbase-staging 2023-07-21 00:14:08,339 DEBUG [Listener at localhost/41495] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 00:14:08,339 DEBUG [Listener at localhost/41495] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 00:14:08,340 DEBUG [Listener at localhost/41495] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 00:14:08,340 DEBUG [Listener at localhost/41495] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
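For orientation, the minicluster topology recorded above (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1) is the shape a test normally requests through HBaseTestingUtility. The following is a minimal sketch of that request; the class name and method bodies are chosen here purely for illustration and are not taken from TestRSGroupsAdmin1 itself.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  // Owns the temporary test-data/hbase.rootdir layout like the one logged above.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    // Same topology as the StartMiniClusterOption in the log:
    // 1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);   // brings up DFS, ZooKeeper and HBase as in the log
    try {
      // test body would go here
    } finally {
      TEST_UTIL.shutdownMiniCluster();    // tears the cluster down; deleteOnExit removes the data dir
    }
  }
}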
2023-07-21 00:14:08,723 INFO [Listener at localhost/41495] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 00:14:09,411 INFO [Listener at localhost/41495] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:09,452 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:09,452 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:09,453 INFO [Listener at localhost/41495] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:09,453 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:09,453 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:09,616 INFO [Listener at localhost/41495] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:09,721 DEBUG [Listener at localhost/41495] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 00:14:09,846 INFO [Listener at localhost/41495] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33855 2023-07-21 00:14:09,859 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:09,861 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:09,890 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33855 connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:09,948 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:338550x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:09,953 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33855-0x101853a75f20000 connected 2023-07-21 00:14:10,008 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:10,009 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:10,013 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:10,022 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33855 2023-07-21 00:14:10,022 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33855 2023-07-21 00:14:10,023 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33855 2023-07-21 00:14:10,024 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33855 2023-07-21 00:14:10,025 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33855 2023-07-21 00:14:10,062 INFO [Listener at localhost/41495] log.Log(170): Logging initialized @7873ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 00:14:10,203 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:10,203 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:10,204 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:10,206 INFO [Listener at localhost/41495] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 00:14:10,206 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:10,206 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:10,210 INFO [Listener at localhost/41495] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
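The repeated "Set watcher on znode that does not yet exist" entries above amount to an exists-with-watch call against ZooKeeper: the watch is registered even though the node is absent, and fires later as a NodeCreated event once the active master writes the znode. A minimal sketch with the plain ZooKeeper client; the ensemble address and znode path are copied from the log, while the watcher body and session timeout are illustrative assumptions.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Ensemble address as reported in the log (127.0.0.1:60276); 90s session timeout is an assumption.
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZooKeeper event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:60276", 90000, watcher);

    // exists() with watch=true registers the default watcher even when the node
    // is missing, which is why the log can say
    // "Set watcher on znode that does not yet exist, /hbase/master".
    if (zk.exists("/hbase/master", true) == null) {
      System.out.println("/hbase/master not created yet; watch registered");
    }
    // A real caller would keep the session open and react to the NodeCreated event.
  }
}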
2023-07-21 00:14:10,282 INFO [Listener at localhost/41495] http.HttpServer(1146): Jetty bound to port 46219 2023-07-21 00:14:10,284 INFO [Listener at localhost/41495] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:10,320 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,325 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3ec3711a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:10,326 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,326 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5b89ffdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:10,517 INFO [Listener at localhost/41495] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:10,529 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:10,530 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:10,532 INFO [Listener at localhost/41495] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:10,539 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,567 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@96a2503{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/jetty-0_0_0_0-46219-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5371193791708242563/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:10,582 INFO [Listener at localhost/41495] server.AbstractConnector(333): Started ServerConnector@49aa43f{HTTP/1.1, (http/1.1)}{0.0.0.0:46219} 2023-07-21 00:14:10,582 INFO [Listener at localhost/41495] server.Server(415): Started @8394ms 2023-07-21 00:14:10,587 INFO [Listener at localhost/41495] master.HMaster(444): hbase.rootdir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336, hbase.cluster.distributed=false 2023-07-21 00:14:10,682 INFO [Listener at localhost/41495] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:10,683 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,683 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,683 INFO 
[Listener at localhost/41495] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:10,683 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,683 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:10,689 INFO [Listener at localhost/41495] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:10,692 INFO [Listener at localhost/41495] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42163 2023-07-21 00:14:10,695 INFO [Listener at localhost/41495] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:10,703 DEBUG [Listener at localhost/41495] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:10,704 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:10,706 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:10,709 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42163 connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:10,713 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:421630x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:10,715 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42163-0x101853a75f20001 connected 2023-07-21 00:14:10,715 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:10,716 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:10,717 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:10,718 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-21 00:14:10,719 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42163 2023-07-21 00:14:10,720 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42163 2023-07-21 00:14:10,720 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-21 00:14:10,721 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42163 2023-07-21 00:14:10,723 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:10,724 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:10,724 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:10,725 INFO [Listener at localhost/41495] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:10,725 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:10,725 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:10,726 INFO [Listener at localhost/41495] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:10,727 INFO [Listener at localhost/41495] http.HttpServer(1146): Jetty bound to port 37207 2023-07-21 00:14:10,728 INFO [Listener at localhost/41495] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:10,733 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,734 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72780387{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:10,734 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,734 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3751ed89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:10,866 INFO [Listener at localhost/41495] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:10,868 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:10,868 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:10,869 INFO [Listener at localhost/41495] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:10,870 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,874 INFO 
[Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@681f0c80{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/jetty-0_0_0_0-37207-hbase-server-2_4_18-SNAPSHOT_jar-_-any-556384162685976033/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:10,876 INFO [Listener at localhost/41495] server.AbstractConnector(333): Started ServerConnector@26730011{HTTP/1.1, (http/1.1)}{0.0.0.0:37207} 2023-07-21 00:14:10,876 INFO [Listener at localhost/41495] server.Server(415): Started @8688ms 2023-07-21 00:14:10,891 INFO [Listener at localhost/41495] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:10,891 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,891 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,892 INFO [Listener at localhost/41495] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:10,892 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:10,892 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:10,892 INFO [Listener at localhost/41495] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:10,894 INFO [Listener at localhost/41495] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33545 2023-07-21 00:14:10,895 INFO [Listener at localhost/41495] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:10,896 DEBUG [Listener at localhost/41495] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:10,897 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:10,899 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:10,900 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33545 connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:10,905 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:335450x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
00:14:10,906 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33545-0x101853a75f20002 connected 2023-07-21 00:14:10,906 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:10,907 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:10,908 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:10,908 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33545 2023-07-21 00:14:10,911 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33545 2023-07-21 00:14:10,911 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33545 2023-07-21 00:14:10,914 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33545 2023-07-21 00:14:10,915 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33545 2023-07-21 00:14:10,917 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:10,917 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:10,918 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:10,918 INFO [Listener at localhost/41495] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:10,918 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:10,918 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:10,919 INFO [Listener at localhost/41495] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
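The "Instantiated default.FPBQ.Fifo ... handlerCount=3 ... maxQueueLength=30" entries above reflect the RPC handler configuration of these test servers. A minimal sketch of the corresponding settings; hbase.regionserver.handler.count and hbase.ipc.server.max.callqueue.length are standard keys, and reading the logged maxQueueLength=30 as 10 call-queue slots per handler is an assumption consistent with the figures above, not something stated in this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // 3 handlers per executor, matching handlerCount=3 in the log.
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Explicit call-queue length; the logged maxQueueLength=30 is consistent
    // with 10 slots per handler when this is left unset.
    conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
    System.out.println("handler count = " + conf.getInt("hbase.regionserver.handler.count", -1));
  }
}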
2023-07-21 00:14:10,919 INFO [Listener at localhost/41495] http.HttpServer(1146): Jetty bound to port 35429 2023-07-21 00:14:10,919 INFO [Listener at localhost/41495] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:10,924 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,924 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d9efc77{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:10,925 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:10,925 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e075be0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:11,072 INFO [Listener at localhost/41495] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:11,073 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:11,074 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:11,074 INFO [Listener at localhost/41495] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:11,077 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:11,078 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3000c365{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/jetty-0_0_0_0-35429-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5917619957355025969/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:11,079 INFO [Listener at localhost/41495] server.AbstractConnector(333): Started ServerConnector@44a214a1{HTTP/1.1, (http/1.1)}{0.0.0.0:35429} 2023-07-21 00:14:11,079 INFO [Listener at localhost/41495] server.Server(415): Started @8891ms 2023-07-21 00:14:11,098 INFO [Listener at localhost/41495] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:11,099 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:11,099 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:11,099 INFO [Listener at localhost/41495] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:11,099 INFO 
[Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:11,100 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:11,100 INFO [Listener at localhost/41495] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:11,103 INFO [Listener at localhost/41495] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46101 2023-07-21 00:14:11,103 INFO [Listener at localhost/41495] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:11,107 DEBUG [Listener at localhost/41495] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:11,108 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:11,110 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:11,112 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46101 connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:11,118 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:461010x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:11,120 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:461010x0, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:11,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46101-0x101853a75f20003 connected 2023-07-21 00:14:11,121 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:11,122 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:11,125 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46101 2023-07-21 00:14:11,126 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46101 2023-07-21 00:14:11,128 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46101 2023-07-21 00:14:11,128 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46101 2023-07-21 00:14:11,130 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=46101 2023-07-21 00:14:11,133 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:11,133 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:11,133 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:11,134 INFO [Listener at localhost/41495] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:11,134 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:11,134 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:11,135 INFO [Listener at localhost/41495] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:11,135 INFO [Listener at localhost/41495] http.HttpServer(1146): Jetty bound to port 40701 2023-07-21 00:14:11,136 INFO [Listener at localhost/41495] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:11,143 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:11,143 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b1103b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:11,143 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:11,144 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b4c0447{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:11,315 INFO [Listener at localhost/41495] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:11,316 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:11,317 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:11,317 INFO [Listener at localhost/41495] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:11,318 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:11,320 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3a992b6f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/jetty-0_0_0_0-40701-hbase-server-2_4_18-SNAPSHOT_jar-_-any-186306279473331687/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:11,321 INFO [Listener at localhost/41495] server.AbstractConnector(333): Started ServerConnector@150955d2{HTTP/1.1, (http/1.1)}{0.0.0.0:40701} 2023-07-21 00:14:11,321 INFO [Listener at localhost/41495] server.Server(415): Started @9133ms 2023-07-21 00:14:11,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:11,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@71c2565{HTTP/1.1, (http/1.1)}{0.0.0.0:35471} 2023-07-21 00:14:11,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @9144ms 2023-07-21 00:14:11,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:11,343 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:11,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:11,366 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:11,366 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:11,366 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:11,367 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:11,367 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:11,368 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:11,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33855,1689898448530 from backup master directory 2023-07-21 00:14:11,371 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:11,377 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:11,377 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:11,378 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:11,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:11,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 00:14:11,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 00:14:11,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/hbase.id with ID: f9b7e694-e773-4815-bf14-1a8b338e3705 2023-07-21 00:14:11,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:11,598 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:11,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x007d59e8 to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:11,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2392fc65, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:11,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:11,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 00:14:11,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 00:14:11,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 00:14:11,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 00:14:11,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 00:14:11,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:11,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store-tmp 2023-07-21 00:14:11,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:11,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:11,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:11,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:11,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:11,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:11,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 00:14:11,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:11,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/WALs/jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:11,945 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33855%2C1689898448530, suffix=, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/WALs/jenkins-hbase4.apache.org,33855,1689898448530, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/oldWALs, maxLogs=10 2023-07-21 00:14:12,030 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:12,030 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:12,030 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 2023-07-21 00:14:12,040 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 00:14:12,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/WALs/jenkins-hbase4.apache.org,33855,1689898448530/jenkins-hbase4.apache.org%2C33855%2C1689898448530.1689898451959 2023-07-21 00:14:12,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK], DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK]] 2023-07-21 00:14:12,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:12,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:12,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,228 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,237 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 00:14:12,278 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 00:14:12,296 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-21 00:14:12,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,306 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:12,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:12,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10784232000, jitterRate=0.00435987114906311}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:12,340 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:12,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 00:14:12,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 00:14:12,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 00:14:12,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 00:14:12,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 00:14:12,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 50 msec 2023-07-21 00:14:12,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 00:14:12,477 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 00:14:12,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-21 00:14:12,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 00:14:12,504 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 00:14:12,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 00:14:12,514 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:12,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 00:14:12,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 00:14:12,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 00:14:12,545 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:12,545 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:12,545 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:12,545 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:12,545 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:12,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33855,1689898448530, sessionid=0x101853a75f20000, setting cluster-up flag (Was=false) 2023-07-21 00:14:12,569 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:12,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 00:14:12,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:12,586 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:12,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 00:14:12,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:12,598 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.hbase-snapshot/.tmp 2023-07-21 00:14:12,635 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(951): ClusterId : f9b7e694-e773-4815-bf14-1a8b338e3705 2023-07-21 00:14:12,635 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(951): ClusterId : f9b7e694-e773-4815-bf14-1a8b338e3705 2023-07-21 00:14:12,636 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(951): ClusterId : f9b7e694-e773-4815-bf14-1a8b338e3705 2023-07-21 00:14:12,644 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:12,644 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:12,644 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:12,653 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:12,653 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:12,653 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:12,653 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:12,653 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:12,653 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:12,658 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:12,658 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:12,658 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:12,660 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ReadOnlyZKClient(139): Connect 0x65facd7f to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-21 00:14:12,661 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ReadOnlyZKClient(139): Connect 0x2eb7c2fc to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:12,661 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ReadOnlyZKClient(139): Connect 0x1f0e270d to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:12,671 DEBUG [RS:1;jenkins-hbase4:33545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6939474c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:12,672 DEBUG [RS:1;jenkins-hbase4:33545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@377a33ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:12,675 DEBUG [RS:2;jenkins-hbase4:46101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a0cba88, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:12,676 DEBUG [RS:2;jenkins-hbase4:46101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d3e9010, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:12,677 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18723a89, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:12,677 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c01788, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:12,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 00:14:12,705 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42163 2023-07-21 00:14:12,707 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46101 2023-07-21 00:14:12,708 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33545 2023-07-21 00:14:12,711 INFO [RS:0;jenkins-hbase4:42163] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:12,711 INFO [RS:1;jenkins-hbase4:33545] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:12,712 INFO [RS:1;jenkins-hbase4:33545] 
regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:12,711 INFO [RS:2;jenkins-hbase4:46101] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:12,712 INFO [RS:2;jenkins-hbase4:46101] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:12,712 INFO [RS:0;jenkins-hbase4:42163] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:12,712 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:12,712 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:12,713 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:12,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 00:14:12,716 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:33545, startcode=1689898450890 2023-07-21 00:14:12,716 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:46101, startcode=1689898451098 2023-07-21 00:14:12,716 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:42163, startcode=1689898450682 2023-07-21 00:14:12,718 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:12,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 00:14:12,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 00:14:12,743 DEBUG [RS:1;jenkins-hbase4:33545] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:12,743 DEBUG [RS:2;jenkins-hbase4:46101] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:12,744 DEBUG [RS:0;jenkins-hbase4:42163] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:12,827 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45445, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:12,828 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59617, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:12,827 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52403, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:12,837 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:12,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:12,847 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:12,848 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:12,879 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 00:14:12,880 WARN [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 00:14:12,879 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 00:14:12,879 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 00:14:12,881 WARN [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 00:14:12,881 WARN [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 00:14:12,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:12,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 00:14:12,889 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:12,889 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 00:14:12,891 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:12,891 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:12,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:12,896 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689898482896 2023-07-21 00:14:12,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 00:14:12,903 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 00:14:12,904 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:12,906 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 00:14:12,910 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:12,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 00:14:12,915 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 00:14:12,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 00:14:12,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 00:14:12,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:12,920 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 00:14:12,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 00:14:12,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 00:14:12,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 00:14:12,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 00:14:12,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898452928,5,FailOnTimeoutGroup] 2023-07-21 00:14:12,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898452928,5,FailOnTimeoutGroup] 2023-07-21 00:14:12,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:12,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 00:14:12,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:12,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:12,973 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:12,975 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:12,976 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 2023-07-21 00:14:12,981 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:46101, startcode=1689898451098 2023-07-21 00:14:12,982 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:33545, startcode=1689898450890 2023-07-21 00:14:12,983 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:42163, startcode=1689898450682 2023-07-21 00:14:12,988 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:12,990 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 00:14:12,993 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 00:14:12,998 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:12,999 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:13,000 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 00:14:13,000 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,000 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:13,000 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 00:14:13,003 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 2023-07-21 00:14:13,003 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36751 2023-07-21 00:14:13,003 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46219 2023-07-21 00:14:13,006 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 2023-07-21 00:14:13,007 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 2023-07-21 00:14:13,007 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36751 2023-07-21 00:14:13,007 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36751 2023-07-21 00:14:13,007 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46219 2023-07-21 00:14:13,007 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46219 2023-07-21 00:14:13,022 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:13,025 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 00:14:13,027 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:13,029 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,029 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,029 WARN [RS:1;jenkins-hbase4:33545] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:13,030 INFO [RS:1;jenkins-hbase4:33545] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:13,030 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,031 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33545,1689898450890] 2023-07-21 00:14:13,031 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42163,1689898450682] 2023-07-21 00:14:13,031 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46101,1689898451098] 2023-07-21 00:14:13,029 WARN [RS:0;jenkins-hbase4:42163] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:13,031 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/info 2023-07-21 00:14:13,030 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,032 WARN [RS:2;jenkins-hbase4:46101] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 00:14:13,032 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:13,034 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,034 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:13,032 INFO [RS:0;jenkins-hbase4:42163] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:13,032 INFO [RS:2;jenkins-hbase4:46101] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:13,040 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,040 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,052 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:13,052 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:13,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,055 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:13,060 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/table 2023-07-21 00:14:13,060 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,060 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,061 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:13,064 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,064 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,065 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,066 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,067 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,068 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,068 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,070 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740 
2023-07-21 00:14:13,071 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740 2023-07-21 00:14:13,073 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,076 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 00:14:13,079 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:13,088 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:13,089 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:13,088 DEBUG [RS:0;jenkins-hbase4:42163] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:13,088 DEBUG [RS:1;jenkins-hbase4:33545] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:13,090 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9623253120, jitterRate=-0.10376471281051636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:13,091 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:13,091 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:13,091 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:13,091 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:13,091 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:13,091 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:13,092 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:13,092 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:13,100 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:13,101 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 00:14:13,106 INFO [RS:2;jenkins-hbase4:46101] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:13,107 INFO [RS:1;jenkins-hbase4:33545] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:13,108 INFO [RS:0;jenkins-hbase4:42163] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:13,113 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 00:14:13,130 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 00:14:13,134 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 00:14:13,136 INFO [RS:0;jenkins-hbase4:42163] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:13,139 INFO [RS:1;jenkins-hbase4:33545] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:13,139 INFO [RS:2;jenkins-hbase4:46101] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:13,143 INFO [RS:1;jenkins-hbase4:33545] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:13,143 INFO [RS:0;jenkins-hbase4:42163] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:13,143 INFO [RS:2;jenkins-hbase4:46101] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:13,143 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,143 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,144 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,145 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:13,145 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:13,145 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:13,155 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,155 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:13,155 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,155 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,155 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:13,156 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:13,157 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,156 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-21 00:14:13,157 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:1;jenkins-hbase4:33545] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:0;jenkins-hbase4:42163] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,157 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,158 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,159 DEBUG [RS:2;jenkins-hbase4:46101] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:13,159 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,160 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:13,160 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,166 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,167 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,167 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,181 INFO [RS:0;jenkins-hbase4:42163] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:13,181 INFO [RS:1;jenkins-hbase4:33545] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:13,181 INFO [RS:2;jenkins-hbase4:46101] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:13,186 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33545,1689898450890-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,186 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46101,1689898451098-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,186 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42163,1689898450682-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,212 INFO [RS:1;jenkins-hbase4:33545] regionserver.Replication(203): jenkins-hbase4.apache.org,33545,1689898450890 started 2023-07-21 00:14:13,212 INFO [RS:0;jenkins-hbase4:42163] regionserver.Replication(203): jenkins-hbase4.apache.org,42163,1689898450682 started 2023-07-21 00:14:13,257 INFO [RS:2;jenkins-hbase4:46101] regionserver.Replication(203): jenkins-hbase4.apache.org,46101,1689898451098 started 2023-07-21 00:14:13,212 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33545,1689898450890, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33545, sessionid=0x101853a75f20002 2023-07-21 00:14:13,257 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46101,1689898451098, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46101, sessionid=0x101853a75f20003 2023-07-21 00:14:13,257 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42163,1689898450682, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42163, sessionid=0x101853a75f20001 2023-07-21 00:14:13,258 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:13,258 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:13,259 DEBUG [RS:0;jenkins-hbase4:42163] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,259 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42163,1689898450682' 2023-07-21 00:14:13,259 DEBUG [RS:0;jenkins-hbase4:42163] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:13,258 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:13,260 DEBUG [RS:1;jenkins-hbase4:33545] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,259 DEBUG [RS:2;jenkins-hbase4:46101] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,261 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46101,1689898451098' 2023-07-21 00:14:13,261 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:13,260 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33545,1689898450890' 2023-07-21 00:14:13,261 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:13,263 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:13,263 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:13,263 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:13,266 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:13,266 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:13,266 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:13,266 DEBUG [RS:0;jenkins-hbase4:42163] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:13,266 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:13,266 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:13,267 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:13,266 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42163,1689898450682' 2023-07-21 00:14:13,267 DEBUG [RS:2;jenkins-hbase4:46101] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,267 DEBUG [RS:1;jenkins-hbase4:33545] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:13,267 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33545,1689898450890' 2023-07-21 
00:14:13,267 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:13,267 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46101,1689898451098' 2023-07-21 00:14:13,267 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:13,267 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:13,268 DEBUG [RS:1;jenkins-hbase4:33545] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:13,268 DEBUG [RS:2;jenkins-hbase4:46101] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:13,268 DEBUG [RS:0;jenkins-hbase4:42163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:13,268 DEBUG [RS:1;jenkins-hbase4:33545] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:13,268 INFO [RS:1;jenkins-hbase4:33545] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:13,268 INFO [RS:1;jenkins-hbase4:33545] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 00:14:13,268 DEBUG [RS:2;jenkins-hbase4:46101] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:13,269 DEBUG [RS:0;jenkins-hbase4:42163] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:13,269 INFO [RS:2;jenkins-hbase4:46101] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:13,269 INFO [RS:0;jenkins-hbase4:42163] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:13,269 INFO [RS:0;jenkins-hbase4:42163] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 00:14:13,269 INFO [RS:2;jenkins-hbase4:46101] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 00:14:13,286 DEBUG [jenkins-hbase4:33855] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 00:14:13,301 DEBUG [jenkins-hbase4:33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:13,302 DEBUG [jenkins-hbase4:33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:13,302 DEBUG [jenkins-hbase4:33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:13,302 DEBUG [jenkins-hbase4:33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:13,302 DEBUG [jenkins-hbase4:33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:13,307 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46101,1689898451098, state=OPENING 2023-07-21 00:14:13,314 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 00:14:13,316 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:13,317 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:13,320 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:13,381 INFO [RS:1;jenkins-hbase4:33545] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33545%2C1689898450890, suffix=, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,33545,1689898450890, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs, maxLogs=32 2023-07-21 00:14:13,381 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42163%2C1689898450682, suffix=, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,42163,1689898450682, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs, maxLogs=32 2023-07-21 00:14:13,384 INFO [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46101%2C1689898451098, suffix=, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,46101,1689898451098, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs, maxLogs=32 2023-07-21 00:14:13,416 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:13,417 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:13,423 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:13,423 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:13,423 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 2023-07-21 00:14:13,423 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:13,424 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:13,424 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 2023-07-21 00:14:13,425 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 2023-07-21 00:14:13,437 INFO [RS:1;jenkins-hbase4:33545] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,33545,1689898450890/jenkins-hbase4.apache.org%2C33545%2C1689898450890.1689898453390 2023-07-21 00:14:13,439 INFO [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,46101,1689898451098/jenkins-hbase4.apache.org%2C46101%2C1689898451098.1689898453390 2023-07-21 00:14:13,440 DEBUG [RS:1;jenkins-hbase4:33545] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK], DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK]] 2023-07-21 00:14:13,440 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,42163,1689898450682/jenkins-hbase4.apache.org%2C42163%2C1689898450682.1689898453390 2023-07-21 00:14:13,444 DEBUG [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK], DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK]] 2023-07-21 00:14:13,444 DEBUG [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK], DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK]] 2023-07-21 00:14:13,495 WARN [ReadOnlyZKClient-127.0.0.1:60276@0x007d59e8] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 00:14:13,511 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,514 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:13,518 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55704, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:13,531 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:13,531 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 00:14:13,535 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:13,542 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55720, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:13,543 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46101%2C1689898451098.meta, suffix=.meta, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,46101,1689898451098, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs, maxLogs=32 2023-07-21 00:14:13,543 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46101] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55720 deadline: 1689898513543, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:13,572 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:13,592 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 
2023-07-21 00:14:13,592 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:13,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,46101,1689898451098/jenkins-hbase4.apache.org%2C46101%2C1689898451098.meta.1689898453544.meta 2023-07-21 00:14:13,671 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK], DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK], DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK]] 2023-07-21 00:14:13,672 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:13,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:13,677 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 00:14:13,681 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 00:14:13,688 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 00:14:13,689 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:13,689 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 00:14:13,689 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 00:14:13,755 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 00:14:13,758 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/info 2023-07-21 00:14:13,758 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/info 2023-07-21 00:14:13,759 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:13,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:13,765 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:13,765 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:13,768 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:13,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:13,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/table 2023-07-21 00:14:13,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/table 2023-07-21 00:14:13,775 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:13,776 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:13,778 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740 2023-07-21 00:14:13,785 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740 2023-07-21 00:14:13,792 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 00:14:13,795 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:13,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10165661760, jitterRate=-0.053248971700668335}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:13,802 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:13,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689898453503 2023-07-21 00:14:13,846 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 00:14:13,847 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 00:14:13,848 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46101,1689898451098, state=OPEN 2023-07-21 00:14:13,851 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 00:14:13,851 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:13,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 00:14:13,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46101,1689898451098 in 531 msec 2023-07-21 00:14:13,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 00:14:13,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 754 msec 2023-07-21 00:14:13,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1500 sec 2023-07-21 00:14:13,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689898453881, completionTime=-1 2023-07-21 00:14:13,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 00:14:13,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 00:14:13,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 00:14:13,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689898513939 2023-07-21 00:14:13,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689898573939 2023-07-21 00:14:13,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 57 msec 2023-07-21 00:14:13,983 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33855,1689898448530-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33855,1689898448530-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,984 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33855,1689898448530-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33855, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:13,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:14,032 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 00:14:14,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 00:14:14,046 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:14,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 00:14:14,073 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:14,087 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:14,088 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:14,095 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 00:14:14,097 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:14,102 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:14,167 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,168 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,172 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0 empty. 2023-07-21 00:14:14,172 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563 empty. 
2023-07-21 00:14:14,172 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,172 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 00:14:14,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,173 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 00:14:14,307 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:14,308 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:14,310 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8569b93240f0e75794ec901e80f2b563, NAME => 'hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:14,311 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b1518e854a007c33a819dec51b94a3c0, NAME => 'hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:14,357 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:14,357 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 8569b93240f0e75794ec901e80f2b563, disabling compactions & flushes 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] 
regionserver.HRegion(1604): Closing b1518e854a007c33a819dec51b94a3c0, disabling compactions & flushes 2023-07-21 00:14:14,359 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,359 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. after waiting 0 ms 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. after waiting 0 ms 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,359 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:14,359 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,359 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 
2023-07-21 00:14:14,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 8569b93240f0e75794ec901e80f2b563: 2023-07-21 00:14:14,360 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b1518e854a007c33a819dec51b94a3c0: 2023-07-21 00:14:14,367 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:14,367 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:14,387 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898454370"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898454370"}]},"ts":"1689898454370"} 2023-07-21 00:14:14,387 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898454370"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898454370"}]},"ts":"1689898454370"} 2023-07-21 00:14:14,421 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:14,423 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:14,426 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
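Note: the CREATE_TABLE_ADD_TO_META step above writes the regioninfo/state Puts into hbase:meta. A hedged sketch, not taken from the test, of how a client could read those same rows back to inspect region state and location; the family and qualifier names follow the Put JSON shown in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Restrict the scan to rows of one table by row prefix, e.g. "hbase:rsgroup,".
      Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("hbase:rsgroup,"));
      scan.addFamily(Bytes.toBytes("info"));
      try (ResultScanner rs = meta.getScanner(scan)) {
        for (Result r : rs) {
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
          System.out.println(Bytes.toString(r.getRow())
              + " state=" + (state == null ? "?" : Bytes.toString(state))
              + " server=" + (server == null ? "?" : Bytes.toString(server)));
        }
      }
    }
  }
}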
2023-07-21 00:14:14,427 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:14,429 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898454427"}]},"ts":"1689898454427"} 2023-07-21 00:14:14,429 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898454424"}]},"ts":"1689898454424"} 2023-07-21 00:14:14,432 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 00:14:14,433 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 00:14:14,438 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:14,438 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:14,438 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:14,438 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:14,439 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:14,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b1518e854a007c33a819dec51b94a3c0, ASSIGN}] 2023-07-21 00:14:14,442 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:14,442 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:14,442 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:14,442 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:14,442 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:14,443 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8569b93240f0e75794ec901e80f2b563, ASSIGN}] 2023-07-21 00:14:14,447 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b1518e854a007c33a819dec51b94a3c0, ASSIGN 2023-07-21 00:14:14,448 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=8569b93240f0e75794ec901e80f2b563, ASSIGN 2023-07-21 00:14:14,449 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, 
region=8569b93240f0e75794ec901e80f2b563, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:14,449 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b1518e854a007c33a819dec51b94a3c0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:14,450 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-21 00:14:14,453 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8569b93240f0e75794ec901e80f2b563, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:14,453 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=b1518e854a007c33a819dec51b94a3c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:14,453 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898454453"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898454453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898454453"}]},"ts":"1689898454453"} 2023-07-21 00:14:14,453 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898454453"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898454453"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898454453"}]},"ts":"1689898454453"} 2023-07-21 00:14:14,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 8569b93240f0e75794ec901e80f2b563, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:14,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure b1518e854a007c33a819dec51b94a3c0, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:14,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8569b93240f0e75794ec901e80f2b563, NAME => 'hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:14,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:14,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 
service=MultiRowMutationService 2023-07-21 00:14:14,621 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-21 00:14:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,625 INFO [StoreOpener-8569b93240f0e75794ec901e80f2b563-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,630 DEBUG [StoreOpener-8569b93240f0e75794ec901e80f2b563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/m 2023-07-21 00:14:14,630 DEBUG [StoreOpener-8569b93240f0e75794ec901e80f2b563-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/m 2023-07-21 00:14:14,631 INFO [StoreOpener-8569b93240f0e75794ec901e80f2b563-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8569b93240f0e75794ec901e80f2b563 columnFamilyName m 2023-07-21 00:14:14,631 INFO [StoreOpener-8569b93240f0e75794ec901e80f2b563-1] regionserver.HStore(310): Store=8569b93240f0e75794ec901e80f2b563/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:14,633 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 
0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,640 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:14,643 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:14,644 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8569b93240f0e75794ec901e80f2b563; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@d4bffcc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:14,644 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8569b93240f0e75794ec901e80f2b563: 2023-07-21 00:14:14,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563., pid=8, masterSystemTime=1689898454612 2023-07-21 00:14:14,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,651 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:14,651 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 
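Note: at this point the hbase:rsgroup region (8569b932...) has opened on jenkins-hbase4.apache.org,46101. A small illustrative sketch, separate from the test itself, of resolving the same placement from a client through RegionLocator.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class WhereIsMyRegion {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      // Single-region table: the empty start key resolves to its only region; 'true' forces a fresh lookup.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
    }
  }
}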
2023-07-21 00:14:14,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b1518e854a007c33a819dec51b94a3c0, NAME => 'hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:14,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:14,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,653 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=8569b93240f0e75794ec901e80f2b563, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:14,653 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898454652"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898454652"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898454652"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898454652"}]},"ts":"1689898454652"} 2023-07-21 00:14:14,654 INFO [StoreOpener-b1518e854a007c33a819dec51b94a3c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,657 DEBUG [StoreOpener-b1518e854a007c33a819dec51b94a3c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/info 2023-07-21 00:14:14,657 DEBUG [StoreOpener-b1518e854a007c33a819dec51b94a3c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/info 2023-07-21 00:14:14,658 INFO [StoreOpener-b1518e854a007c33a819dec51b94a3c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b1518e854a007c33a819dec51b94a3c0 columnFamilyName info 2023-07-21 00:14:14,659 INFO [StoreOpener-b1518e854a007c33a819dec51b94a3c0-1] regionserver.HStore(310): Store=b1518e854a007c33a819dec51b94a3c0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:14,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,664 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-21 00:14:14,664 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 8569b93240f0e75794ec901e80f2b563, server=jenkins-hbase4.apache.org,46101,1689898451098 in 201 msec 2023-07-21 00:14:14,667 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:14,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-21 00:14:14,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=8569b93240f0e75794ec901e80f2b563, ASSIGN in 222 msec 2023-07-21 00:14:14,671 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:14,672 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898454671"}]},"ts":"1689898454671"} 2023-07-21 00:14:14,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:14,674 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b1518e854a007c33a819dec51b94a3c0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9608510720, jitterRate=-0.10513770580291748}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:14,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b1518e854a007c33a819dec51b94a3c0: 2023-07-21 00:14:14,675 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 00:14:14,676 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0., pid=9, masterSystemTime=1689898454612 2023-07-21 00:14:14,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:14,679 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:14,683 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:14,685 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=b1518e854a007c33a819dec51b94a3c0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:14,686 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898454685"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898454685"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898454685"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898454685"}]},"ts":"1689898454685"} 2023-07-21 00:14:14,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 596 msec 2023-07-21 00:14:14,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-21 00:14:14,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure b1518e854a007c33a819dec51b94a3c0, server=jenkins-hbase4.apache.org,46101,1689898451098 in 229 msec 2023-07-21 00:14:14,713 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-21 00:14:14,714 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b1518e854a007c33a819dec51b94a3c0, ASSIGN in 262 msec 2023-07-21 00:14:14,715 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:14,716 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898454716"}]},"ts":"1689898454716"} 2023-07-21 00:14:14,719 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 00:14:14,723 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:14,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 676 msec 2023-07-21 00:14:14,738 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 00:14:14,738 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 00:14:14,775 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 00:14:14,776 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:14,776 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:14,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 00:14:14,839 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:14,845 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 45 msec 2023-07-21 00:14:14,855 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:14,856 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:14,858 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 00:14:14,860 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:14,865 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 00:14:14,870 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:14,879 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-07-21 00:14:14,896 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, 
state=SyncConnected, path=/hbase/namespace/default 2023-07-21 00:14:14,899 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 00:14:14,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.522sec 2023-07-21 00:14:14,902 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 00:14:14,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 00:14:14,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 00:14:14,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33855,1689898448530-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 00:14:14,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33855,1689898448530-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 00:14:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 00:14:14,978 DEBUG [Listener at localhost/41495] zookeeper.ReadOnlyZKClient(139): Connect 0x48b9c2bd to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:14,985 DEBUG [Listener at localhost/41495] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38693439, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:15,004 DEBUG [hconnection-0x7db6cade-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:15,025 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55736, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:15,036 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:15,038 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:15,048 DEBUG [Listener at localhost/41495] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 00:14:15,059 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44654, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 00:14:15,078 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 00:14:15,078 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:15,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 00:14:15,085 DEBUG [Listener at localhost/41495] zookeeper.ReadOnlyZKClient(139): Connect 0x67598398 to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:15,097 DEBUG [Listener at localhost/41495] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6908e6c5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:15,097 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:15,101 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:15,103 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101853a75f2000a connected 2023-07-21 00:14:15,141 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=419, OpenFileDescriptor=689, MaxFileDescriptor=60000, SystemLoadAverage=480, ProcessCount=177, AvailableMemoryMB=3245 2023-07-21 00:14:15,144 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-21 00:14:15,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:15,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:15,223 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 00:14:15,242 INFO [Listener at localhost/41495] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:15,242 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:15,242 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:15,242 INFO [Listener at localhost/41495] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:15,243 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:15,243 INFO [Listener at localhost/41495] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, 
handlerCount=1 2023-07-21 00:14:15,243 INFO [Listener at localhost/41495] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:15,247 INFO [Listener at localhost/41495] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43987 2023-07-21 00:14:15,247 INFO [Listener at localhost/41495] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:15,249 DEBUG [Listener at localhost/41495] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:15,250 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:15,256 INFO [Listener at localhost/41495] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:15,259 INFO [Listener at localhost/41495] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43987 connecting to ZooKeeper ensemble=127.0.0.1:60276 2023-07-21 00:14:15,264 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:439870x0, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:15,265 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(162): regionserver:439870x0, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:15,266 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43987-0x101853a75f2000b connected 2023-07-21 00:14:15,267 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 00:14:15,268 DEBUG [Listener at localhost/41495] zookeeper.ZKUtil(164): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:15,268 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43987 2023-07-21 00:14:15,269 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43987 2023-07-21 00:14:15,269 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43987 2023-07-21 00:14:15,270 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43987 2023-07-21 00:14:15,271 DEBUG [Listener at localhost/41495] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43987 2023-07-21 00:14:15,273 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:15,273 INFO [Listener at localhost/41495] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:15,273 INFO [Listener at localhost/41495] 
http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:15,273 INFO [Listener at localhost/41495] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:15,274 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:15,274 INFO [Listener at localhost/41495] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:15,274 INFO [Listener at localhost/41495] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:15,274 INFO [Listener at localhost/41495] http.HttpServer(1146): Jetty bound to port 45519 2023-07-21 00:14:15,275 INFO [Listener at localhost/41495] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:15,279 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:15,279 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f357cde{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:15,280 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:15,280 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68aba549{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:15,405 INFO [Listener at localhost/41495] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:15,407 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:15,407 INFO [Listener at localhost/41495] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:15,407 INFO [Listener at localhost/41495] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:15,409 INFO [Listener at localhost/41495] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:15,410 INFO [Listener at localhost/41495] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@16958b7c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/java.io.tmpdir/jetty-0_0_0_0-45519-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5963812577186299842/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:15,412 INFO [Listener at localhost/41495] server.AbstractConnector(333): Started ServerConnector@1d8aa3aa{HTTP/1.1, 
(http/1.1)}{0.0.0.0:45519} 2023-07-21 00:14:15,412 INFO [Listener at localhost/41495] server.Server(415): Started @13224ms 2023-07-21 00:14:15,424 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(951): ClusterId : f9b7e694-e773-4815-bf14-1a8b338e3705 2023-07-21 00:14:15,425 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:15,427 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:15,427 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:15,433 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:15,435 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ReadOnlyZKClient(139): Connect 0x1ebd84fa to 127.0.0.1:60276 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:15,452 DEBUG [RS:3;jenkins-hbase4:43987] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@107a795d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:15,452 DEBUG [RS:3;jenkins-hbase4:43987] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1445ef8a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:15,465 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43987 2023-07-21 00:14:15,465 INFO [RS:3;jenkins-hbase4:43987] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:15,465 INFO [RS:3;jenkins-hbase4:43987] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:15,465 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:15,466 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33855,1689898448530 with isa=jenkins-hbase4.apache.org/172.31.14.131:43987, startcode=1689898455241 2023-07-21 00:14:15,467 DEBUG [RS:3;jenkins-hbase4:43987] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:15,472 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40765, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:15,473 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33855] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,473 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
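Note: the stretch from "Restoring servers: 1" through RS:3 coming online is the test adding a fourth region server to the already running minicluster. A rough sketch of doing the same with the test utility; it assumes the 2.4-era HBaseTestingUtility/MiniHBaseCluster API and is not the test's own code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServerSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);                                    // one master, three region servers
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    // Spin up a fourth region server, as the test does before the rsgroup scenarios.
    JVMClusterUtil.RegionServerThread rs = cluster.startRegionServer();
    rs.waitForServerOnline();
    System.out.println("New RS: " + rs.getRegionServer().getServerName());
    util.shutdownMiniCluster();
  }
}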
2023-07-21 00:14:15,474 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336 2023-07-21 00:14:15,474 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36751 2023-07-21 00:14:15,474 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46219 2023-07-21 00:14:15,479 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:15,479 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:15,479 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:15,479 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:15,479 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:15,480 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43987,1689898455241] 2023-07-21 00:14:15,480 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:15,480 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,480 WARN [RS:3;jenkins-hbase4:43987] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 00:14:15,480 INFO [RS:3;jenkins-hbase4:43987] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:15,480 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,484 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:15,484 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:15,485 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33855,1689898448530] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 00:14:15,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:15,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:15,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,489 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,490 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,491 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:15,492 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:15,492 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:15,493 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ZKUtil(162): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,494 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:15,494 INFO [RS:3;jenkins-hbase4:43987] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:15,499 INFO [RS:3;jenkins-hbase4:43987] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:15,499 INFO [RS:3;jenkins-hbase4:43987] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:15,499 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:15,499 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:15,502 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
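The MemStoreFlusher and PressureAwareCompactionThroughputController numbers logged above come straight from region server configuration. A rough sketch of the keys that plausibly produce them; the key names and values are stated as assumptions about this 2.4.x build, not read from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerTuningSketch {
      public static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        // Global memstore limit (782.4 M above) as a fraction of heap, plus its low-water mark.
        conf.setDouble("hbase.regionserver.global.memstore.size", 0.4);
        conf.setDouble("hbase.regionserver.global.memstore.size.lower.limit", 0.95);
        // Compaction throughput bounds: 100 MB/s upper, 50 MB/s lower, tuned every 60 s.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
        return conf;
      }
    }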
2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,502 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:15,503 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,503 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,503 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,503 DEBUG [RS:3;jenkins-hbase4:43987] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:15,505 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:15,505 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:15,505 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:15,524 INFO [RS:3;jenkins-hbase4:43987] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:15,524 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43987,1689898455241-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:15,541 INFO [RS:3;jenkins-hbase4:43987] regionserver.Replication(203): jenkins-hbase4.apache.org,43987,1689898455241 started 2023-07-21 00:14:15,541 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43987,1689898455241, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43987, sessionid=0x101853a75f2000b 2023-07-21 00:14:15,541 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:15,541 DEBUG [RS:3;jenkins-hbase4:43987] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,541 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43987,1689898455241' 2023-07-21 00:14:15,541 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:15,542 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43987,1689898455241' 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:15,543 DEBUG [RS:3;jenkins-hbase4:43987] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:15,544 DEBUG [RS:3;jenkins-hbase4:43987] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:15,544 INFO [RS:3;jenkins-hbase4:43987] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:15,544 INFO [RS:3;jenkins-hbase4:43987] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
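Both quota managers report themselves disabled above because quota support is off by default; turning it on is a single switch. A sketch, with the key name given as an assumption about this build:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotasSketch {
      public static Configuration withQuotas() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key; lets RegionServerRpcQuotaManager and the space quota manager start.
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }
    }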
2023-07-21 00:14:15,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:15,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:15,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:15,566 DEBUG [hconnection-0x5a6bf6db-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:15,569 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55750, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:15,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:15,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:15,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:15,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:15,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:44654 deadline: 1689899655590, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
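The AddRSGroup and MoveServers requests above, and the ConstraintException they trigger because the master's address (jenkins-hbase4.apache.org:33855) is not a live region server, correspond to the RSGroupAdminClient API. A minimal sketch assuming a standard client Connection; the connection setup is not part of the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          // Moving the master's own address fails with ConstraintException, as in the log:
          // only addresses of live region servers can be placed in an rsgroup.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33855)),
              "master");
        }
      }
    }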
2023-07-21 00:14:15,592 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:15,596 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:15,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:15,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:15,599 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:15,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:15,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:15,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:15,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:15,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:15,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:15,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:15,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:15,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:15,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:15,628 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:15,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:15,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:15,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:15,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to default 2023-07-21 00:14:15,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:15,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:15,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:15,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:15,659 INFO [RS:3;jenkins-hbase4:43987] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43987%2C1689898455241, suffix=, logDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,43987,1689898455241, archiveDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs, maxLogs=32 2023-07-21 00:14:15,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:15,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:15,685 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:15,691 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:15,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-21 00:14:15,692 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:15,692 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:15,693 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:15,708 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK] 2023-07-21 00:14:15,710 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK] 2023-07-21 00:14:15,710 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK] 2023-07-21 00:14:15,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:15,715 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:15,722 INFO [RS:3;jenkins-hbase4:43987] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/WALs/jenkins-hbase4.apache.org,43987,1689898455241/jenkins-hbase4.apache.org%2C43987%2C1689898455241.1689898455661 2023-07-21 00:14:15,722 DEBUG [RS:3;jenkins-hbase4:43987] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39689,DS-f7b638e4-1e18-4db1-a399-9895eb93a2ca,DISK], DatanodeInfoWithStorage[127.0.0.1:42719,DS-a3784960-4f9c-4c2b-82ba-c13e483fd9aa,DISK], DatanodeInfoWithStorage[127.0.0.1:34811,DS-6921345a-75ee-43a8-af20-66801e0c34f1,DISK]] 2023-07-21 00:14:15,728 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:15,728 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:15,729 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:15,729 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a empty. 2023-07-21 00:14:15,729 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:15,729 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 empty. 2023-07-21 00:14:15,730 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 empty. 2023-07-21 00:14:15,730 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:15,730 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:15,731 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a empty. 2023-07-21 00:14:15,731 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:15,731 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:15,734 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 empty. 
2023-07-21 00:14:15,735 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:15,735 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:15,735 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 00:14:15,787 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:15,791 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7daf50517b66988d37d6b7bb2860ecd8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:15,799 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 88d17b148c0e8b134b44eb23bfcccd9a, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:15,807 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e16af981d1d2c55d6cb7b7d8530a2484, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:15,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:15,873 DEBUG 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:15,874 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:15,874 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e16af981d1d2c55d6cb7b7d8530a2484, disabling compactions & flushes 2023-07-21 00:14:15,875 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7daf50517b66988d37d6b7bb2860ecd8, disabling compactions & flushes 2023-07-21 00:14:15,875 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:15,875 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:15,875 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:15,876 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:15,876 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. after waiting 0 ms 2023-07-21 00:14:15,876 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:15,876 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 
2023-07-21 00:14:15,876 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7daf50517b66988d37d6b7bb2860ecd8: 2023-07-21 00:14:15,877 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2eb8e61a805957035ab70435db5bb74a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:15,876 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. after waiting 0 ms 2023-07-21 00:14:15,877 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:15,877 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:15,877 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e16af981d1d2c55d6cb7b7d8530a2484: 2023-07-21 00:14:15,877 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:15,878 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 88d17b148c0e8b134b44eb23bfcccd9a, disabling compactions & flushes 2023-07-21 00:14:15,878 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8bc8759a139310371477de7f765de721, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:15,878 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:15,878 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:15,878 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. after waiting 0 ms 2023-07-21 00:14:15,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:15,879 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:15,879 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 88d17b148c0e8b134b44eb23bfcccd9a: 2023-07-21 00:14:15,906 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:15,907 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2eb8e61a805957035ab70435db5bb74a, disabling compactions & flushes 2023-07-21 00:14:15,908 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. after waiting 0 ms 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:15,908 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2eb8e61a805957035ab70435db5bb74a: 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 8bc8759a139310371477de7f765de721, disabling compactions & flushes 2023-07-21 00:14:15,908 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. after waiting 0 ms 2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:15,908 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 
2023-07-21 00:14:15,908 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 8bc8759a139310371477de7f765de721: 2023-07-21 00:14:15,912 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:15,914 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898455914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898455914"}]},"ts":"1689898455914"} 2023-07-21 00:14:15,914 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898455914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898455914"}]},"ts":"1689898455914"} 2023-07-21 00:14:15,915 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898455914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898455914"}]},"ts":"1689898455914"} 2023-07-21 00:14:15,915 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898455914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898455914"}]},"ts":"1689898455914"} 2023-07-21 00:14:15,915 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898455914"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898455914"}]},"ts":"1689898455914"} 2023-07-21 00:14:15,969 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
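The CreateTableProcedure above writes five regions for a one-family table into hbase:meta. A sketch of an equivalent client-side request, using the family name, REGION_REPLICATION setting, and region boundaries shown in the log; the connection setup is an assumption:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // The four split keys below are the region boundaries from the log,
          // giving five regions in total.
          byte[][] splitKeys = {
              Bytes.toBytes("aaaaa"),
              new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
              new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
              Bytes.toBytes("zzzzz")
          };
          admin.createTable(
              TableDescriptorBuilder
                  .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
                  .setRegionReplication(1)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);
        }
      }
    }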
2023-07-21 00:14:15,971 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:15,971 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898455971"}]},"ts":"1689898455971"} 2023-07-21 00:14:15,974 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 00:14:15,984 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:15,984 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:15,984 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:15,984 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:15,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, ASSIGN}] 2023-07-21 00:14:15,988 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, ASSIGN 2023-07-21 00:14:15,988 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, ASSIGN 2023-07-21 00:14:15,989 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, ASSIGN 2023-07-21 00:14:15,989 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, ASSIGN 2023-07-21 00:14:15,991 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:15,991 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:15,991 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:15,991 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:15,991 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, ASSIGN 2023-07-21 00:14:15,992 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:16,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:16,141 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
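With all five TransitRegionStateProcedures started and the balancer's plan logged above, a test normally just blocks until assignment finishes. A short sketch, assuming the same HBaseTestingUtility drives this run:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        // Returns once every region of the table has an OPEN location recorded in hbase:meta.
        util.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
      }
    }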
2023-07-21 00:14:16,145 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,145 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,145 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,145 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,145 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,145 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456145"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456145"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456145"}]},"ts":"1689898456145"} 2023-07-21 00:14:16,145 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456145"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456145"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456145"}]},"ts":"1689898456145"} 2023-07-21 00:14:16,145 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456145"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456145"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456145"}]},"ts":"1689898456145"} 2023-07-21 00:14:16,145 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456145"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456145"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456145"}]},"ts":"1689898456145"} 2023-07-21 00:14:16,145 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456145"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456145"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456145"}]},"ts":"1689898456145"} 2023-07-21 00:14:16,152 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 
2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:16,156 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=14, state=RUNNABLE; OpenRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:16,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=17, state=RUNNABLE; OpenRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:16,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=13, state=RUNNABLE; OpenRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:16,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:16,309 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,309 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:16,313 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45886, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:16,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:16,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 
2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88d17b148c0e8b134b44eb23bfcccd9a, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e16af981d1d2c55d6cb7b7d8530a2484, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,323 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,324 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,326 DEBUG [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/f 2023-07-21 00:14:16,326 DEBUG [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/f 2023-07-21 00:14:16,326 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e16af981d1d2c55d6cb7b7d8530a2484 columnFamilyName f 2023-07-21 00:14:16,326 DEBUG [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/f 2023-07-21 00:14:16,326 DEBUG [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/f 2023-07-21 00:14:16,327 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88d17b148c0e8b134b44eb23bfcccd9a columnFamilyName f 2023-07-21 00:14:16,327 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] regionserver.HStore(310): Store=e16af981d1d2c55d6cb7b7d8530a2484/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:16,331 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] regionserver.HStore(310): Store=88d17b148c0e8b134b44eb23bfcccd9a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:16,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:16,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:16,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:16,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:16,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e16af981d1d2c55d6cb7b7d8530a2484; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10229280480, jitterRate=-0.04732401669025421}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e16af981d1d2c55d6cb7b7d8530a2484: 2023-07-21 00:14:16,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484., pid=22, masterSystemTime=1689898456309 2023-07-21 00:14:16,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:16,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:16,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:16,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:16,353 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2eb8e61a805957035ab70435db5bb74a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 00:14:16,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88d17b148c0e8b134b44eb23bfcccd9a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9504266240, jitterRate=-0.11484622955322266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:16,353 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456353"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898456353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898456353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898456353"}]},"ts":"1689898456353"} 2023-07-21 00:14:16,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88d17b148c0e8b134b44eb23bfcccd9a: 2023-07-21 00:14:16,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:16,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a., pid=21, masterSystemTime=1689898456310 2023-07-21 00:14:16,364 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
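
[Editor's note] The hbase:meta Puts in this stretch persist, per region, the hosting server (server/serverstartcode qualifiers) and the open sequence number (seqnumDuringOpen). A client can read the same placement back through RegionLocator; the following is a small sketch that assumes an already-open Connection named conn, not code from the test.

    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionPlacement {
      // 'conn' is assumed to be an already-open org.apache.hadoop.hbase.client.Connection.
      static void printPlacement(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            // Encoded region name and hosting ServerName, i.e. the same data the
            // server/sn/seqnumDuringOpen meta updates above are recording.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
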
2023-07-21 00:14:16,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:16,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:16,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bc8759a139310371477de7f765de721, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 00:14:16,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:16,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,369 DEBUG [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/f 2023-07-21 00:14:16,369 DEBUG [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/f 2023-07-21 00:14:16,369 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2eb8e61a805957035ab70435db5bb74a columnFamilyName f 2023-07-21 00:14:16,370 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,370 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] regionserver.HStore(310): Store=2eb8e61a805957035ab70435db5bb74a/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:16,372 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,373 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456372"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898456372"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898456372"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898456372"}]},"ts":"1689898456372"} 2023-07-21 00:14:16,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,374 DEBUG [StoreOpener-8bc8759a139310371477de7f765de721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/f 2023-07-21 00:14:16,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,375 DEBUG [StoreOpener-8bc8759a139310371477de7f765de721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/f 2023-07-21 00:14:16,380 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bc8759a139310371477de7f765de721 columnFamilyName f 2023-07-21 00:14:16,380 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-21 00:14:16,380 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,43987,1689898455241 in 206 msec 2023-07-21 00:14:16,381 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] regionserver.HStore(310): Store=8bc8759a139310371477de7f765de721/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
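
[Editor's note] The CompactionConfiguration(173) lines report the effective compaction settings for family f (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2). If a test or application needed different values for one family, they could be set on the column-family descriptor; the sketch below uses the standard hbase.hstore.compaction.* configuration keys, which are assumptions about how one would override these knobs and are not read from this log.

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CompactionTuning {
      // Per-family overrides for the knobs reported by CompactionConfiguration(173).
      static ColumnFamilyDescriptor tunedFamily() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setConfiguration("hbase.hstore.compaction.min", "3")      // minFilesToCompact
            .setConfiguration("hbase.hstore.compaction.max", "10")     // maxFilesToCompact
            .setConfiguration("hbase.hstore.compaction.ratio", "1.2")  // compaction ratio
            .setConfiguration("hbase.hstore.compaction.min.size",
                String.valueOf(128L * 1024 * 1024))                    // minCompactSize (128 MB)
            .build();
      }
    }
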
2023-07-21 00:14:16,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:16,385 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, ASSIGN in 396 msec 2023-07-21 00:14:16,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=13 2023-07-21 00:14:16,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=13, state=SUCCESS; OpenRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,46101,1689898451098 in 220 msec 2023-07-21 00:14:16,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:16,389 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, ASSIGN in 402 msec 2023-07-21 00:14:16,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2eb8e61a805957035ab70435db5bb74a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10978873280, jitterRate=0.022487252950668335}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:16,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:16,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2eb8e61a805957035ab70435db5bb74a: 2023-07-21 00:14:16,390 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a., pid=18, masterSystemTime=1689898456309 2023-07-21 00:14:16,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
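
[Editor's note] The recurring "Checking to see if procedure is done pid=12" entries are the client-side table future polling the master until CreateTableProcedure pid=12 finishes (it completes a few entries below, after 765 msec). With the asynchronous Admin call the same wait is explicit in client code; a small sketch, assuming admin, desc and splits from the creation example earlier in this section:

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class AsyncCreate {
      // 'admin', 'desc' and 'splits' are assumed to match the creation sketch above;
      // nothing here is taken from the test source itself.
      static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splits) throws Exception {
        Future<Void> f = admin.createTableAsync(desc, splits);
        // get() returns once the master reports the CreateTableProcedure as finished,
        // which is what the periodic "Checking to see if procedure is done" RPCs
        // in this log correspond to.
        f.get();
      }
    }
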
2023-07-21 00:14:16,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:16,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:16,393 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,393 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8bc8759a139310371477de7f765de721; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11220364480, jitterRate=0.04497787356376648}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:16,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8bc8759a139310371477de7f765de721: 2023-07-21 00:14:16,393 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456393"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898456393"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898456393"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898456393"}]},"ts":"1689898456393"} 2023-07-21 00:14:16,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721., pid=20, masterSystemTime=1689898456310 2023-07-21 00:14:16,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:16,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:16,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 
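
[Editor's note] After the last region opens (a few entries below), the test's listener thread waits until every region of the table is reported as assigned before moving the table to the new rsgroup; those are the HBaseTestingUtility "Waiting until all regions of table ... get assigned" and Waiter entries further down. A minimal sketch of that wait, where util stands in for the test's HBaseTestingUtility instance (the variable name is an assumption):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignment {
      // 'util' stands in for the HBaseTestingUtility that started this mini-cluster.
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Blocks (60s default timeout) until hbase:meta and the assignment manager agree
        // that every region of the table is open on some region server.
        util.waitUntilAllRegionsAssigned(tn);
      }
    }
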
2023-07-21 00:14:16,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7daf50517b66988d37d6b7bb2860ecd8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 00:14:16,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:16,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,397 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,398 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456397"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898456397"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898456397"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898456397"}]},"ts":"1689898456397"} 2023-07-21 00:14:16,400 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-21 00:14:16,400 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,43987,1689898455241 in 244 msec 2023-07-21 00:14:16,402 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,404 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, ASSIGN in 416 msec 2023-07-21 00:14:16,406 DEBUG [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/f 2023-07-21 00:14:16,406 DEBUG [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/f 2023-07-21 00:14:16,407 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=17 2023-07-21 00:14:16,407 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=17, state=SUCCESS; OpenRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,46101,1689898451098 in 243 msec 2023-07-21 00:14:16,407 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7daf50517b66988d37d6b7bb2860ecd8 columnFamilyName f 2023-07-21 00:14:16,408 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] regionserver.HStore(310): Store=7daf50517b66988d37d6b7bb2860ecd8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:16,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,412 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, ASSIGN in 423 msec 2023-07-21 00:14:16,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:16,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:16,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7daf50517b66988d37d6b7bb2860ecd8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10093216480, jitterRate=-0.05999596416950226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:16,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7daf50517b66988d37d6b7bb2860ecd8: 2023-07-21 
00:14:16,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8., pid=19, masterSystemTime=1689898456310 2023-07-21 00:14:16,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:16,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:16,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,424 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456424"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898456424"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898456424"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898456424"}]},"ts":"1689898456424"} 2023-07-21 00:14:16,429 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=14 2023-07-21 00:14:16,429 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=14, state=SUCCESS; OpenRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,46101,1689898451098 in 270 msec 2023-07-21 00:14:16,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-21 00:14:16,433 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, ASSIGN in 445 msec 2023-07-21 00:14:16,434 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:16,434 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898456434"}]},"ts":"1689898456434"} 2023-07-21 00:14:16,436 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 00:14:16,439 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:16,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 765 msec 2023-07-21 00:14:16,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:16,836 INFO [Listener at localhost/41495] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-21 00:14:16,836 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-21 00:14:16,837 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:16,844 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-21 00:14:16,845 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:16,845 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-21 00:14:16,846 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:16,853 DEBUG [Listener at localhost/41495] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:16,858 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53250, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:16,862 DEBUG [Listener at localhost/41495] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:16,867 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41166, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:16,868 DEBUG [Listener at localhost/41495] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:16,873 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45888, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:16,875 DEBUG [Listener at localhost/41495] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:16,895 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:16,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:16,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:16,913 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:16,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:16,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:16,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 88d17b148c0e8b134b44eb23bfcccd9a to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:16,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:16,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:16,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:16,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:16,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, REOPEN/MOVE 2023-07-21 00:14:16,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 7daf50517b66988d37d6b7bb2860ecd8 to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:16,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:16,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:16,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:16,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:16,948 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, REOPEN/MOVE 2023-07-21 
00:14:16,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, REOPEN/MOVE 2023-07-21 00:14:16,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region e16af981d1d2c55d6cb7b7d8530a2484 to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,952 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, REOPEN/MOVE 2023-07-21 00:14:16,951 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:16,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:16,952 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456951"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456951"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456951"}]},"ts":"1689898456951"} 2023-07-21 00:14:16,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:16,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:16,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:16,953 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,953 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456953"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456953"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456953"}]},"ts":"1689898456953"} 2023-07-21 00:14:16,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, REOPEN/MOVE 2023-07-21 00:14:16,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 2eb8e61a805957035ab70435db5bb74a to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,956 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, REOPEN/MOVE 2023-07-21 00:14:16,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:16,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:16,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:16,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:16,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:16,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:16,963 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:16,965 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,965 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456965"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456965"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456965"}]},"ts":"1689898456965"} 2023-07-21 00:14:16,970 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=25, state=RUNNABLE; CloseRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:16,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, REOPEN/MOVE 2023-07-21 00:14:16,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 8bc8759a139310371477de7f765de721 to RSGroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:16,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:16,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:16,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:16,973 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:16,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:16,976 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, REOPEN/MOVE 2023-07-21 00:14:16,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, REOPEN/MOVE 2023-07-21 00:14:16,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_761444744, current retry=0 2023-07-21 00:14:16,978 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, REOPEN/MOVE 2023-07-21 00:14:16,980 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:16,980 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898456980"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456980"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456980"}]},"ts":"1689898456980"} 2023-07-21 00:14:16,981 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:16,981 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898456981"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898456981"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898456981"}]},"ts":"1689898456981"} 2023-07-21 00:14:16,983 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:16,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE; CloseRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:17,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 
00:14:17,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7daf50517b66988d37d6b7bb2860ecd8, disabling compactions & flushes 2023-07-21 00:14:17,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. after waiting 0 ms 2023-07-21 00:14:17,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e16af981d1d2c55d6cb7b7d8530a2484, disabling compactions & flushes 2023-07-21 00:14:17,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:17,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:17,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. after waiting 0 ms 2023-07-21 00:14:17,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:17,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:17,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:17,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 
2023-07-21 00:14:17,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e16af981d1d2c55d6cb7b7d8530a2484: 2023-07-21 00:14:17,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e16af981d1d2c55d6cb7b7d8530a2484 move to jenkins-hbase4.apache.org,33545,1689898450890 record at close sequenceid=2 2023-07-21 00:14:17,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,154 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7daf50517b66988d37d6b7bb2860ecd8: 2023-07-21 00:14:17,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7daf50517b66988d37d6b7bb2860ecd8 move to jenkins-hbase4.apache.org,42163,1689898450682 record at close sequenceid=2 2023-07-21 00:14:17,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,158 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2eb8e61a805957035ab70435db5bb74a, disabling compactions & flushes 2023-07-21 00:14:17,158 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:17,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:17,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. after waiting 0 ms 2023-07-21 00:14:17,159 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:17,162 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=CLOSED 2023-07-21 00:14:17,162 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457162"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898457162"}]},"ts":"1689898457162"} 2023-07-21 00:14:17,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,163 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8bc8759a139310371477de7f765de721, disabling compactions & flushes 2023-07-21 00:14:17,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:17,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:17,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. after waiting 0 ms 2023-07-21 00:14:17,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 
2023-07-21 00:14:17,166 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=CLOSED 2023-07-21 00:14:17,166 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457166"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898457166"}]},"ts":"1689898457166"} 2023-07-21 00:14:17,176 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=25 2023-07-21 00:14:17,176 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=25, state=SUCCESS; CloseRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,43987,1689898455241 in 197 msec 2023-07-21 00:14:17,177 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-21 00:14:17,177 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,46101,1689898451098 in 206 msec 2023-07-21 00:14:17,178 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:17,179 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:17,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:17,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:17,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2eb8e61a805957035ab70435db5bb74a: 2023-07-21 00:14:17,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2eb8e61a805957035ab70435db5bb74a move to jenkins-hbase4.apache.org,33545,1689898450890 record at close sequenceid=2 2023-07-21 00:14:17,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,202 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=CLOSED 2023-07-21 00:14:17,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:17,203 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898457202"}]},"ts":"1689898457202"} 2023-07-21 00:14:17,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:17,207 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8bc8759a139310371477de7f765de721: 2023-07-21 00:14:17,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8bc8759a139310371477de7f765de721 move to jenkins-hbase4.apache.org,33545,1689898450890 record at close sequenceid=2 2023-07-21 00:14:17,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88d17b148c0e8b134b44eb23bfcccd9a, disabling compactions & flushes 2023-07-21 00:14:17,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:17,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:17,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. after waiting 0 ms 2023-07-21 00:14:17,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:17,213 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-21 00:14:17,213 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,43987,1689898455241 in 223 msec 2023-07-21 00:14:17,213 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=CLOSED 2023-07-21 00:14:17,213 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457213"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898457213"}]},"ts":"1689898457213"} 2023-07-21 00:14:17,216 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:17,221 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=30 2023-07-21 00:14:17,221 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=30, state=SUCCESS; CloseRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,46101,1689898451098 in 232 msec 2023-07-21 00:14:17,222 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:17,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:17,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:17,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88d17b148c0e8b134b44eb23bfcccd9a: 2023-07-21 00:14:17,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 88d17b148c0e8b134b44eb23bfcccd9a move to jenkins-hbase4.apache.org,42163,1689898450682 record at close sequenceid=2 2023-07-21 00:14:17,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,249 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=CLOSED 2023-07-21 00:14:17,249 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457249"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898457249"}]},"ts":"1689898457249"} 2023-07-21 00:14:17,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-21 00:14:17,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,46101,1689898451098 in 294 msec 2023-07-21 00:14:17,256 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:17,330 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
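Note: the entries above cover the close half of the REOPEN/MOVE procedures (pids 23-32): each region is closed on its old server, its CLOSED state is written to hbase:meta, and the balancer then plans the reassignment of all five regions. A minimal sketch of how a client could check the resulting region placements after such a move is shown below; it assumes a standard HBase 2.x client classpath and a reachable cluster, and the class name VerifyRegionPlacement is purely illustrative (only the table name is taken from this log).

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class VerifyRegionPlacement {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Table name copied from the test log; adjust for your own cluster.
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Print each region's encoded name and the server it is currently assigned to.
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}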
2023-07-21 00:14:17,332 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,332 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,332 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:17,333 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898457332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898457332"}]},"ts":"1689898457332"} 2023-07-21 00:14:17,332 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:17,333 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898457332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898457332"}]},"ts":"1689898457332"} 2023-07-21 00:14:17,333 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898457332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898457332"}]},"ts":"1689898457332"} 2023-07-21 00:14:17,333 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457332"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898457332"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898457332"}]},"ts":"1689898457332"} 2023-07-21 00:14:17,336 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=30, state=RUNNABLE; OpenRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:17,338 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,338 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457338"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898457338"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898457338"}]},"ts":"1689898457338"} 2023-07-21 00:14:17,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=24, state=RUNNABLE; OpenRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:17,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=27, state=RUNNABLE; OpenRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:17,348 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=23, state=RUNNABLE; OpenRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:17,349 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=25, state=RUNNABLE; OpenRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:17,494 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,494 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:17,497 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:17,497 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:17,498 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:17,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 
2023-07-21 00:14:17,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e16af981d1d2c55d6cb7b7d8530a2484, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 00:14:17,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:17,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,525 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41182, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:17,531 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,532 DEBUG [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/f 2023-07-21 00:14:17,532 DEBUG [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/f 2023-07-21 00:14:17,533 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e16af981d1d2c55d6cb7b7d8530a2484 columnFamilyName f 2023-07-21 00:14:17,534 INFO [StoreOpener-e16af981d1d2c55d6cb7b7d8530a2484-1] regionserver.HStore(310): Store=e16af981d1d2c55d6cb7b7d8530a2484/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:17,539 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7daf50517b66988d37d6b7bb2860ecd8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 00:14:17,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:17,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,543 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,546 DEBUG [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/f 2023-07-21 00:14:17,546 DEBUG [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/f 2023-07-21 00:14:17,548 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7daf50517b66988d37d6b7bb2860ecd8 columnFamilyName f 2023-07-21 00:14:17,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:17,552 INFO [StoreOpener-7daf50517b66988d37d6b7bb2860ecd8-1] regionserver.HStore(310): Store=7daf50517b66988d37d6b7bb2860ecd8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:17,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e16af981d1d2c55d6cb7b7d8530a2484; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10087264000, jitterRate=-0.06055033206939697}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:17,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e16af981d1d2c55d6cb7b7d8530a2484: 2023-07-21 00:14:17,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484., pid=37, masterSystemTime=1689898457493 2023-07-21 00:14:17,561 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,577 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:17,578 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7daf50517b66988d37d6b7bb2860ecd8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10912922240, jitterRate=0.016345083713531494}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:17,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7daf50517b66988d37d6b7bb2860ecd8: 2023-07-21 00:14:17,583 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,583 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457583"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898457583"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898457583"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898457583"}]},"ts":"1689898457583"} 2023-07-21 00:14:17,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:17,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:17,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:17,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8., pid=34, masterSystemTime=1689898457497 2023-07-21 00:14:17,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2eb8e61a805957035ab70435db5bb74a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 00:14:17,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:17,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,596 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:17,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:17,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=25 2023-07-21 00:14:17,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88d17b148c0e8b134b44eb23bfcccd9a, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 00:14:17,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=25, state=SUCCESS; OpenRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,33545,1689898450890 in 239 msec 2023-07-21 00:14:17,597 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:17,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,597 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457597"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898457597"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898457597"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898457597"}]},"ts":"1689898457597"} 2023-07-21 00:14:17,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:17,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, REOPEN/MOVE in 644 msec 2023-07-21 00:14:17,602 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,604 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=24 2023-07-21 00:14:17,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; OpenRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,42163,1689898450682 in 261 msec 2023-07-21 00:14:17,604 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,604 DEBUG [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/f 2023-07-21 00:14:17,605 DEBUG [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/f 2023-07-21 00:14:17,606 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2eb8e61a805957035ab70435db5bb74a columnFamilyName f 2023-07-21 00:14:17,607 INFO [StoreOpener-2eb8e61a805957035ab70435db5bb74a-1] regionserver.HStore(310): Store=2eb8e61a805957035ab70435db5bb74a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:17,608 DEBUG [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/f 2023-07-21 00:14:17,608 DEBUG [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/f 2023-07-21 00:14:17,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, REOPEN/MOVE in 659 msec 2023-07-21 00:14:17,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,609 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88d17b148c0e8b134b44eb23bfcccd9a columnFamilyName f 2023-07-21 00:14:17,610 INFO [StoreOpener-88d17b148c0e8b134b44eb23bfcccd9a-1] regionserver.HStore(310): Store=88d17b148c0e8b134b44eb23bfcccd9a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:17,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:17,618 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2eb8e61a805957035ab70435db5bb74a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10912827360, jitterRate=0.016336247324943542}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:17,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2eb8e61a805957035ab70435db5bb74a: 2023-07-21 00:14:17,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:17,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a., pid=35, masterSystemTime=1689898457493 2023-07-21 00:14:17,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88d17b148c0e8b134b44eb23bfcccd9a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9538430400, jitterRate=-0.11166444420814514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:17,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88d17b148c0e8b134b44eb23bfcccd9a: 2023-07-21 00:14:17,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:17,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:17,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a., pid=36, masterSystemTime=1689898457497 2023-07-21 00:14:17,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:17,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bc8759a139310371477de7f765de721, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 00:14:17,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,624 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:17,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,624 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898457623"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898457623"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898457623"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898457623"}]},"ts":"1689898457623"} 2023-07-21 00:14:17,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:17,626 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:17,627 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:17,628 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457627"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898457627"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898457627"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898457627"}]},"ts":"1689898457627"} 2023-07-21 00:14:17,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=27 2023-07-21 00:14:17,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=27, state=SUCCESS; OpenRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,33545,1689898450890 in 286 msec 2023-07-21 00:14:17,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=23 2023-07-21 00:14:17,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=23, state=SUCCESS; OpenRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,42163,1689898450682 in 283 msec 2023-07-21 00:14:17,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, REOPEN/MOVE in 676 msec 2023-07-21 00:14:17,638 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, REOPEN/MOVE in 693 msec 2023-07-21 00:14:17,639 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,641 DEBUG [StoreOpener-8bc8759a139310371477de7f765de721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/f 2023-07-21 00:14:17,641 DEBUG [StoreOpener-8bc8759a139310371477de7f765de721-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/f 2023-07-21 00:14:17,642 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bc8759a139310371477de7f765de721 columnFamilyName f 2023-07-21 00:14:17,643 INFO [StoreOpener-8bc8759a139310371477de7f765de721-1] regionserver.HStore(310): Store=8bc8759a139310371477de7f765de721/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:17,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8bc8759a139310371477de7f765de721 2023-07-21 00:14:17,653 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8bc8759a139310371477de7f765de721; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11468540800, jitterRate=0.06809109449386597}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:17,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8bc8759a139310371477de7f765de721: 2023-07-21 00:14:17,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721., pid=33, masterSystemTime=1689898457493 2023-07-21 00:14:17,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:17,659 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 
2023-07-21 00:14:17,659 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:17,659 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898457659"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898457659"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898457659"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898457659"}]},"ts":"1689898457659"} 2023-07-21 00:14:17,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=30 2023-07-21 00:14:17,665 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=30, state=SUCCESS; OpenRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,33545,1689898450890 in 326 msec 2023-07-21 00:14:17,668 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, REOPEN/MOVE in 691 msec 2023-07-21 00:14:17,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-21 00:14:17,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_761444744. 
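The entries above show the RSGroupAdminService.MoveTables flow completing: the REOPEN/MOVE procedures finish and all five regions of Group_testTableMoveTruncateAndDrop are reported as moved to the target group Group_testTableMoveTruncateAndDrop_761444744. A minimal client-side sketch of the calls that would drive these RPCs, assuming the hbase-rsgroup module's RSGroupAdminClient helper and an open Connection (the class and method names here are my reading of that client, inferred from the MoveTables / GetRSGroupInfoOfTable RPC names in the log, not quoted from it):

```java
// Sketch only: client-side calls corresponding to the MoveTables and
// GetRSGroupInfoOfTable RPCs recorded above. Table and group names are
// taken from the log; everything else is illustrative.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    String targetGroup = "Group_testTableMoveTruncateAndDrop_761444744";
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // MoveTables RPC: reassign all regions of the table to the target group
      rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
      // GetRSGroupInfoOfTable RPC: confirm the table now belongs to the group
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table is in group: " + info.getName());
    }
  }
}
```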
2023-07-21 00:14:17,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:17,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:17,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:17,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:17,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:17,992 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:18,000 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:18,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:18,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:18,018 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898458017"}]},"ts":"1689898458017"} 2023-07-21 00:14:18,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-21 00:14:18,020 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 00:14:18,021 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 00:14:18,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, UNASSIGN}] 2023-07-21 00:14:18,026 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, UNASSIGN 2023-07-21 00:14:18,027 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, UNASSIGN 2023-07-21 00:14:18,027 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, UNASSIGN 2023-07-21 00:14:18,027 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, UNASSIGN 2023-07-21 00:14:18,027 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, UNASSIGN 2023-07-21 00:14:18,028 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,028 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,029 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458028"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458028"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458028"}]},"ts":"1689898458028"} 2023-07-21 00:14:18,029 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:18,029 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,029 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458028"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458028"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458028"}]},"ts":"1689898458028"} 2023-07-21 00:14:18,029 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458029"}]},"ts":"1689898458029"} 2023-07-21 00:14:18,029 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:18,029 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458029"}]},"ts":"1689898458029"} 2023-07-21 00:14:18,029 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458029"}]},"ts":"1689898458029"} 2023-07-21 00:14:18,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=42, state=RUNNABLE; CloseRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=43, state=RUNNABLE; CloseRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=41, state=RUNNABLE; CloseRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,036 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=40, state=RUNNABLE; CloseRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:18,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=39, state=RUNNABLE; CloseRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:18,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-21 00:14:18,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:18,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e16af981d1d2c55d6cb7b7d8530a2484, disabling compactions & flushes 2023-07-21 00:14:18,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 
2023-07-21 00:14:18,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:18,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. after waiting 0 ms 2023-07-21 00:14:18,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:18,199 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:18,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:18,200 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7daf50517b66988d37d6b7bb2860ecd8, disabling compactions & flushes 2023-07-21 00:14:18,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:18,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:18,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. after waiting 0 ms 2023-07-21 00:14:18,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 2023-07-21 00:14:18,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484. 2023-07-21 00:14:18,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e16af981d1d2c55d6cb7b7d8530a2484: 2023-07-21 00:14:18,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:18,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8. 
2023-07-21 00:14:18,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7daf50517b66988d37d6b7bb2860ecd8: 2023-07-21 00:14:18,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:18,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8bc8759a139310371477de7f765de721 2023-07-21 00:14:18,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8bc8759a139310371477de7f765de721, disabling compactions & flushes 2023-07-21 00:14:18,216 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:18,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:18,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. after waiting 0 ms 2023-07-21 00:14:18,216 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:18,216 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=e16af981d1d2c55d6cb7b7d8530a2484, regionState=CLOSED 2023-07-21 00:14:18,216 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458216"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458216"}]},"ts":"1689898458216"} 2023-07-21 00:14:18,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:18,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:18,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88d17b148c0e8b134b44eb23bfcccd9a, disabling compactions & flushes 2023-07-21 00:14:18,219 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:18,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:18,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. after waiting 0 ms 2023-07-21 00:14:18,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 
2023-07-21 00:14:18,220 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=7daf50517b66988d37d6b7bb2860ecd8, regionState=CLOSED 2023-07-21 00:14:18,220 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458220"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458220"}]},"ts":"1689898458220"} 2023-07-21 00:14:18,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:18,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721. 2023-07-21 00:14:18,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8bc8759a139310371477de7f765de721: 2023-07-21 00:14:18,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8bc8759a139310371477de7f765de721 2023-07-21 00:14:18,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:18,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2eb8e61a805957035ab70435db5bb74a, disabling compactions & flushes 2023-07-21 00:14:18,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:18,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 2023-07-21 00:14:18,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. after waiting 0 ms 2023-07-21 00:14:18,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:18,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:18,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-21 00:14:18,239 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=8bc8759a139310371477de7f765de721, regionState=CLOSED 2023-07-21 00:14:18,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; CloseRegionProcedure e16af981d1d2c55d6cb7b7d8530a2484, server=jenkins-hbase4.apache.org,33545,1689898450890 in 192 msec 2023-07-21 00:14:18,239 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458239"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458239"}]},"ts":"1689898458239"} 2023-07-21 00:14:18,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a. 2023-07-21 00:14:18,240 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88d17b148c0e8b134b44eb23bfcccd9a: 2023-07-21 00:14:18,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=40 2023-07-21 00:14:18,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=40, state=SUCCESS; CloseRegionProcedure 7daf50517b66988d37d6b7bb2860ecd8, server=jenkins-hbase4.apache.org,42163,1689898450682 in 192 msec 2023-07-21 00:14:18,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:18,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:18,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e16af981d1d2c55d6cb7b7d8530a2484, UNASSIGN in 215 msec 2023-07-21 00:14:18,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a. 
2023-07-21 00:14:18,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2eb8e61a805957035ab70435db5bb74a: 2023-07-21 00:14:18,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7daf50517b66988d37d6b7bb2860ecd8, UNASSIGN in 218 msec 2023-07-21 00:14:18,246 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=88d17b148c0e8b134b44eb23bfcccd9a, regionState=CLOSED 2023-07-21 00:14:18,246 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458246"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458246"}]},"ts":"1689898458246"} 2023-07-21 00:14:18,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=43 2023-07-21 00:14:18,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=43, state=SUCCESS; CloseRegionProcedure 8bc8759a139310371477de7f765de721, server=jenkins-hbase4.apache.org,33545,1689898450890 in 210 msec 2023-07-21 00:14:18,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:18,251 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2eb8e61a805957035ab70435db5bb74a, regionState=CLOSED 2023-07-21 00:14:18,251 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458251"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458251"}]},"ts":"1689898458251"} 2023-07-21 00:14:18,253 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8bc8759a139310371477de7f765de721, UNASSIGN in 224 msec 2023-07-21 00:14:18,256 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=39 2023-07-21 00:14:18,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=39, state=SUCCESS; CloseRegionProcedure 88d17b148c0e8b134b44eb23bfcccd9a, server=jenkins-hbase4.apache.org,42163,1689898450682 in 215 msec 2023-07-21 00:14:18,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=42 2023-07-21 00:14:18,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=42, state=SUCCESS; CloseRegionProcedure 2eb8e61a805957035ab70435db5bb74a, server=jenkins-hbase4.apache.org,33545,1689898450890 in 223 msec 2023-07-21 00:14:18,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88d17b148c0e8b134b44eb23bfcccd9a, UNASSIGN in 233 msec 2023-07-21 00:14:18,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=38 2023-07-21 00:14:18,262 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2eb8e61a805957035ab70435db5bb74a, UNASSIGN in 234 msec 2023-07-21 00:14:18,264 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898458264"}]},"ts":"1689898458264"} 2023-07-21 00:14:18,266 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 00:14:18,269 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 00:14:18,273 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 263 msec 2023-07-21 00:14:18,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-21 00:14:18,323 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-21 00:14:18,325 INFO [Listener at localhost/41495] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:18,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:18,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-21 00:14:18,347 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-21 00:14:18,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-21 00:14:18,361 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:18,361 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:18,361 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:18,361 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:18,361 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:18,366 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits] 2023-07-21 00:14:18,366 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits] 2023-07-21 00:14:18,366 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits] 2023-07-21 00:14:18,366 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits] 2023-07-21 00:14:18,366 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits] 2023-07-21 00:14:18,385 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a/recovered.edits/7.seqid 2023-07-21 00:14:18,385 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484/recovered.edits/7.seqid 2023-07-21 00:14:18,385 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits/7.seqid to 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721/recovered.edits/7.seqid 2023-07-21 00:14:18,386 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8/recovered.edits/7.seqid 2023-07-21 00:14:18,387 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2eb8e61a805957035ab70435db5bb74a 2023-07-21 00:14:18,387 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e16af981d1d2c55d6cb7b7d8530a2484 2023-07-21 00:14:18,388 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8bc8759a139310371477de7f765de721 2023-07-21 00:14:18,388 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7daf50517b66988d37d6b7bb2860ecd8 2023-07-21 00:14:18,388 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a/recovered.edits/7.seqid 2023-07-21 00:14:18,389 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88d17b148c0e8b134b44eb23bfcccd9a 2023-07-21 00:14:18,389 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 00:14:18,425 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 00:14:18,431 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 00:14:18,432 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-21 00:14:18,432 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898458432"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,432 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898458432"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,432 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898458432"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,432 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898458432"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,432 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898458432"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,436 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 00:14:18,436 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 88d17b148c0e8b134b44eb23bfcccd9a, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898455667.88d17b148c0e8b134b44eb23bfcccd9a.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7daf50517b66988d37d6b7bb2860ecd8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898455667.7daf50517b66988d37d6b7bb2860ecd8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e16af981d1d2c55d6cb7b7d8530a2484, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898455667.e16af981d1d2c55d6cb7b7d8530a2484.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 2eb8e61a805957035ab70435db5bb74a, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898455667.2eb8e61a805957035ab70435db5bb74a.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8bc8759a139310371477de7f765de721, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898455667.8bc8759a139310371477de7f765de721.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 00:14:18,436 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-21 00:14:18,436 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898458436"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:18,439 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 00:14:18,449 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:18,449 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:18,449 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:18,449 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:18,449 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:18,450 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc empty. 2023-07-21 00:14:18,450 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac empty. 2023-07-21 00:14:18,450 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a empty. 2023-07-21 00:14:18,450 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 empty. 2023-07-21 00:14:18,450 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 empty. 
2023-07-21 00:14:18,451 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:18,451 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:18,451 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:18,451 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:18,451 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:18,451 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 00:14:18,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-21 00:14:18,478 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:18,486 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 34263a8c75ee50d30ee24cdf5ec58cac, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:18,486 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 782ab6e85c5338f428ea481539b522f7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:18,487 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => fc9446dc0a64f3b03140c19b8ca53334, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 34263a8c75ee50d30ee24cdf5ec58cac, disabling compactions & flushes 2023-07-21 00:14:18,588 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. after waiting 0 ms 2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:18,588 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 
2023-07-21 00:14:18,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 34263a8c75ee50d30ee24cdf5ec58cac: 2023-07-21 00:14:18,589 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 04bd849ead499c40f396b236c2baabbc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing fc9446dc0a64f3b03140c19b8ca53334, disabling compactions & flushes 2023-07-21 00:14:18,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. after waiting 0 ms 2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:18,611 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 
2023-07-21 00:14:18,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for fc9446dc0a64f3b03140c19b8ca53334: 2023-07-21 00:14:18,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d1876148242f8ec205d85e23fec8fa9a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 782ab6e85c5338f428ea481539b522f7, disabling compactions & flushes 2023-07-21 00:14:18,619 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. after waiting 0 ms 2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:18,619 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 
2023-07-21 00:14:18,619 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 782ab6e85c5338f428ea481539b522f7: 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 04bd849ead499c40f396b236c2baabbc, disabling compactions & flushes 2023-07-21 00:14:18,646 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. after waiting 0 ms 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:18,646 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:18,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 04bd849ead499c40f396b236c2baabbc: 2023-07-21 00:14:18,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-21 00:14:18,674 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:18,674 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d1876148242f8ec205d85e23fec8fa9a, disabling compactions & flushes 2023-07-21 00:14:18,674 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:18,675 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:18,675 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 
after waiting 0 ms 2023-07-21 00:14:18,675 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:18,675 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:18,675 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d1876148242f8ec205d85e23fec8fa9a: 2023-07-21 00:14:18,680 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458680"}]},"ts":"1689898458680"} 2023-07-21 00:14:18,680 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458680"}]},"ts":"1689898458680"} 2023-07-21 00:14:18,680 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458680"}]},"ts":"1689898458680"} 2023-07-21 00:14:18,680 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458680"}]},"ts":"1689898458680"} 2023-07-21 00:14:18,681 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898458680"}]},"ts":"1689898458680"} 2023-07-21 00:14:18,686 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
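"Added 5 regions to meta" above means hbase:meta now holds a regioninfo and state cell for each of the five regions. A short sketch of reading those boundaries back through the public RegionLocator API; it assumes an open Connection like the one in the previous sketch:

import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

final class RegionBoundaryDump {
  // Lists encoded name, [startKey, endKey) and hosting server for every region of the table.
  static void dump(Connection conn) throws Exception {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(name)) {
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getEncodedName() + " ["
            + Bytes.toStringBinary(loc.getRegion().getStartKey()) + ", "
            + Bytes.toStringBinary(loc.getRegion().getEndKey()) + ") on "
            + loc.getServerName());
      }
    }
  }
}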
2023-07-21 00:14:18,687 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898458687"}]},"ts":"1689898458687"} 2023-07-21 00:14:18,689 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-21 00:14:18,695 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:18,696 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:18,696 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:18,696 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:18,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, ASSIGN}] 2023-07-21 00:14:18,702 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, ASSIGN 2023-07-21 00:14:18,702 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, ASSIGN 2023-07-21 00:14:18,703 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, ASSIGN 2023-07-21 00:14:18,703 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, ASSIGN 2023-07-21 00:14:18,703 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, ASSIGN 2023-07-21 00:14:18,704 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:18,704 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:18,704 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:18,704 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:18,712 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:18,854 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
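The balancer line above ("Reassigned 5 regions") closes the candidate-selection step for ASSIGN pids 50-54; the OpenRegionProcedure children follow below. Test code normally blocks until those transitions complete rather than tailing this log. A hedged sketch of one common way to wait; the exact wait used by TestRSGroupsAdmin1 is not shown in this excerpt, so treat the helper names as assumptions:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class WaitForAssignment {
  // Polls until every region of the table is open and reachable from the client.
  static void await(Admin admin) throws Exception {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    while (!admin.isTableAvailable(name)) {
      Thread.sleep(100); // pids 50-54 move the regions OFFLINE -> OPENING -> OPEN meanwhile
    }
  }
}
// Inside a minicluster test the equivalent is typically
// TEST_UTIL.waitUntilAllRegionsAssigned(name) on HBaseTestingUtility,
// which reads hbase:meta until each region row has a live server location.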
2023-07-21 00:14:18,859 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=d1876148242f8ec205d85e23fec8fa9a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,859 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=fc9446dc0a64f3b03140c19b8ca53334, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:18,859 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458859"}]},"ts":"1689898458859"} 2023-07-21 00:14:18,859 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=34263a8c75ee50d30ee24cdf5ec58cac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,859 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=04bd849ead499c40f396b236c2baabbc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:18,859 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898458859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458859"}]},"ts":"1689898458859"} 2023-07-21 00:14:18,859 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=782ab6e85c5338f428ea481539b522f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:18,860 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458859"}]},"ts":"1689898458859"} 2023-07-21 00:14:18,860 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458859"}]},"ts":"1689898458859"} 2023-07-21 00:14:18,859 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898458859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898458859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898458859"}]},"ts":"1689898458859"} 2023-07-21 00:14:18,862 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=54, state=RUNNABLE; OpenRegionProcedure 
d1876148242f8ec205d85e23fec8fa9a, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; OpenRegionProcedure 34263a8c75ee50d30ee24cdf5ec58cac, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=53, state=RUNNABLE; OpenRegionProcedure 04bd849ead499c40f396b236c2baabbc, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:18,871 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=52, state=RUNNABLE; OpenRegionProcedure fc9446dc0a64f3b03140c19b8ca53334, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:18,872 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=51, state=RUNNABLE; OpenRegionProcedure 782ab6e85c5338f428ea481539b522f7, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:18,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-21 00:14:19,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:19,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc9446dc0a64f3b03140c19b8ca53334, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 00:14:19,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 34263a8c75ee50d30ee24cdf5ec58cac, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 00:14:19,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): 
checking encryption for fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,031 INFO [StoreOpener-fc9446dc0a64f3b03140c19b8ca53334-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,032 INFO [StoreOpener-34263a8c75ee50d30ee24cdf5ec58cac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,036 DEBUG [StoreOpener-fc9446dc0a64f3b03140c19b8ca53334-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/f 2023-07-21 00:14:19,037 DEBUG [StoreOpener-fc9446dc0a64f3b03140c19b8ca53334-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/f 2023-07-21 00:14:19,037 INFO [StoreOpener-fc9446dc0a64f3b03140c19b8ca53334-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc9446dc0a64f3b03140c19b8ca53334 columnFamilyName f 2023-07-21 00:14:19,037 DEBUG [StoreOpener-34263a8c75ee50d30ee24cdf5ec58cac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/f 2023-07-21 00:14:19,038 DEBUG [StoreOpener-34263a8c75ee50d30ee24cdf5ec58cac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/f 2023-07-21 00:14:19,039 INFO [StoreOpener-34263a8c75ee50d30ee24cdf5ec58cac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 34263a8c75ee50d30ee24cdf5ec58cac columnFamilyName f 2023-07-21 00:14:19,039 INFO [StoreOpener-fc9446dc0a64f3b03140c19b8ca53334-1] regionserver.HStore(310): Store=fc9446dc0a64f3b03140c19b8ca53334/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:19,040 INFO [StoreOpener-34263a8c75ee50d30ee24cdf5ec58cac-1] regionserver.HStore(310): Store=34263a8c75ee50d30ee24cdf5ec58cac/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:19,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:19,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:19,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc9446dc0a64f3b03140c19b8ca53334; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9439316320, jitterRate=-0.12089516222476959}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:19,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc9446dc0a64f3b03140c19b8ca53334: 2023-07-21 00:14:19,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 34263a8c75ee50d30ee24cdf5ec58cac; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9787492320, jitterRate=-0.08846874535083771}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:19,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 34263a8c75ee50d30ee24cdf5ec58cac: 2023-07-21 00:14:19,089 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334., pid=58, masterSystemTime=1689898459018 2023-07-21 00:14:19,089 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac., pid=56, masterSystemTime=1689898459017 2023-07-21 00:14:19,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 
2023-07-21 00:14:19,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 04bd849ead499c40f396b236c2baabbc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 00:14:19,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:19,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,105 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=fc9446dc0a64f3b03140c19b8ca53334, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:19,106 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459105"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898459105"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898459105"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898459105"}]},"ts":"1689898459105"} 2023-07-21 00:14:19,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:19,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:19,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 
2023-07-21 00:14:19,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 782ab6e85c5338f428ea481539b522f7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 00:14:19,106 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=34263a8c75ee50d30ee24cdf5ec58cac, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,107 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459106"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898459106"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898459106"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898459106"}]},"ts":"1689898459106"} 2023-07-21 00:14:19,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:19,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,111 INFO [StoreOpener-04bd849ead499c40f396b236c2baabbc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,115 INFO [StoreOpener-782ab6e85c5338f428ea481539b522f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,115 DEBUG [StoreOpener-04bd849ead499c40f396b236c2baabbc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/f 2023-07-21 00:14:19,116 DEBUG [StoreOpener-04bd849ead499c40f396b236c2baabbc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/f 2023-07-21 00:14:19,118 INFO [StoreOpener-04bd849ead499c40f396b236c2baabbc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 04bd849ead499c40f396b236c2baabbc columnFamilyName f 2023-07-21 00:14:19,119 INFO [StoreOpener-04bd849ead499c40f396b236c2baabbc-1] regionserver.HStore(310): Store=04bd849ead499c40f396b236c2baabbc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:19,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-21 00:14:19,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; OpenRegionProcedure 34263a8c75ee50d30ee24cdf5ec58cac, server=jenkins-hbase4.apache.org,33545,1689898450890 in 252 msec 2023-07-21 00:14:19,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=52 2023-07-21 00:14:19,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=52, state=SUCCESS; OpenRegionProcedure fc9446dc0a64f3b03140c19b8ca53334, server=jenkins-hbase4.apache.org,42163,1689898450682 in 244 msec 2023-07-21 00:14:19,129 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, ASSIGN in 424 msec 2023-07-21 00:14:19,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, ASSIGN in 427 msec 2023-07-21 00:14:19,130 DEBUG [StoreOpener-782ab6e85c5338f428ea481539b522f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/f 2023-07-21 00:14:19,130 DEBUG [StoreOpener-782ab6e85c5338f428ea481539b522f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/f 2023-07-21 00:14:19,131 INFO [StoreOpener-782ab6e85c5338f428ea481539b522f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 782ab6e85c5338f428ea481539b522f7 columnFamilyName f 2023-07-21 00:14:19,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,132 INFO [StoreOpener-782ab6e85c5338f428ea481539b522f7-1] regionserver.HStore(310): Store=782ab6e85c5338f428ea481539b522f7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:19,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:19,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 04bd849ead499c40f396b236c2baabbc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10771569600, jitterRate=0.003180593252182007}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:19,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 04bd849ead499c40f396b236c2baabbc: 2023-07-21 00:14:19,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc., pid=57, masterSystemTime=1689898459018 2023-07-21 00:14:19,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:19,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 
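The CompactionConfiguration(173) entries in this stretch record the effective compaction settings for family 'f': minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0. A sketch of the standard configuration keys those numbers come from; the values here simply mirror what the log reports rather than recommending anything:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class CompactionConfigSketch {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Keys read by CompactionConfiguration when each store opens.
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // selection ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    return conf;
  }
}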
2023-07-21 00:14:19,159 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=04bd849ead499c40f396b236c2baabbc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:19,159 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459158"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898459158"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898459158"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898459158"}]},"ts":"1689898459158"} 2023-07-21 00:14:19,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:19,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=53 2023-07-21 00:14:19,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=53, state=SUCCESS; OpenRegionProcedure 04bd849ead499c40f396b236c2baabbc, server=jenkins-hbase4.apache.org,42163,1689898450682 in 298 msec 2023-07-21 00:14:19,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 782ab6e85c5338f428ea481539b522f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10830773600, jitterRate=0.008694395422935486}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:19,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 782ab6e85c5338f428ea481539b522f7: 2023-07-21 00:14:19,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7., pid=59, masterSystemTime=1689898459017 2023-07-21 00:14:19,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:19,172 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:19,172 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 
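Each "Opened <region>; next sequenceid=2; SteppingSplitPolicy..." entry also prints a per-region desiredMaxFileSize with a jitterRate; the base value comes from hbase.hregion.max.filesize (10 GB by default) and the jitter keeps sibling regions from all splitting at once, which matches the roughly 9.4-10.8 GB spread seen here. A sketch of the two knobs involved, using the stock HBase 2.x class name and keys; the values are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class SplitPolicyConfigSketch {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Default policy in HBase 2.x; the log shows it wrapping
    // IncreasingToUpperBoundRegionSplitPolicy / ConstantSizeRegionSplitPolicy.
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    // Base size that the per-region jitterRate is applied to.
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
    return conf;
  }
}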
2023-07-21 00:14:19,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d1876148242f8ec205d85e23fec8fa9a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 00:14:19,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:19,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,173 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, ASSIGN in 467 msec 2023-07-21 00:14:19,173 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=782ab6e85c5338f428ea481539b522f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,174 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459173"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898459173"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898459173"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898459173"}]},"ts":"1689898459173"} 2023-07-21 00:14:19,176 INFO [StoreOpener-d1876148242f8ec205d85e23fec8fa9a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,180 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=51 2023-07-21 00:14:19,180 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=51, state=SUCCESS; OpenRegionProcedure 782ab6e85c5338f428ea481539b522f7, server=jenkins-hbase4.apache.org,33545,1689898450890 in 304 msec 2023-07-21 00:14:19,183 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, ASSIGN in 485 msec 2023-07-21 00:14:19,183 DEBUG [StoreOpener-d1876148242f8ec205d85e23fec8fa9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/f 2023-07-21 00:14:19,184 DEBUG 
[StoreOpener-d1876148242f8ec205d85e23fec8fa9a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/f 2023-07-21 00:14:19,184 INFO [StoreOpener-d1876148242f8ec205d85e23fec8fa9a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d1876148242f8ec205d85e23fec8fa9a columnFamilyName f 2023-07-21 00:14:19,185 INFO [StoreOpener-d1876148242f8ec205d85e23fec8fa9a-1] regionserver.HStore(310): Store=d1876148242f8ec205d85e23fec8fa9a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:19,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:19,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d1876148242f8ec205d85e23fec8fa9a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9856100640, jitterRate=-0.08207909762859344}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:19,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d1876148242f8ec205d85e23fec8fa9a: 2023-07-21 00:14:19,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a., pid=55, masterSystemTime=1689898459017 2023-07-21 00:14:19,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 
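The entries that follow show the truncate finishing (pid=49), the test reading back rsgroup Group_testTableMoveTruncateAndDrop_761444744 via RSGroupAdminService.GetRSGroupInfo, and DisableTableProcedure pid=60 starting. A hedged sketch of the client calls that would drive those RPCs; RSGroupAdminClient lives in the hbase-rsgroup module on branch-2, and its constructor and method names here are recalled rather than taken from this log, so treat them as assumptions:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient; // assumed branch-2 location
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class GroupInfoThenDisableSketch {
  static void run(Connection conn) throws Exception {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // Issues the master-side GetRSGroupInfo call logged by RSGroupAdminEndpoint.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_761444744");
    System.out.println("servers=" + info.getServers() + " tables=" + info.getTables());
    // Disabling matches DisableTableProcedure pid=60; the test name suggests a drop follows,
    // but that part of the log is outside this excerpt.
    try (Admin admin = conn.getAdmin()) {
      admin.disableTable(name);
    }
  }
}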
2023-07-21 00:14:19,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:19,200 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=d1876148242f8ec205d85e23fec8fa9a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,200 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459200"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898459200"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898459200"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898459200"}]},"ts":"1689898459200"} 2023-07-21 00:14:19,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=54 2023-07-21 00:14:19,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=54, state=SUCCESS; OpenRegionProcedure d1876148242f8ec205d85e23fec8fa9a, server=jenkins-hbase4.apache.org,33545,1689898450890 in 341 msec 2023-07-21 00:14:19,213 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-21 00:14:19,213 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, ASSIGN in 512 msec 2023-07-21 00:14:19,214 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898459213"}]},"ts":"1689898459213"} 2023-07-21 00:14:19,216 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-21 00:14:19,218 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-21 00:14:19,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 883 msec 2023-07-21 00:14:19,266 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 00:14:19,353 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 00:14:19,354 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-21 00:14:19,355 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:19,355 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 00:14:19,355 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 00:14:19,355 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-21 00:14:19,358 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 00:14:19,359 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 00:14:19,359 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 00:14:19,360 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-21 00:14:19,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-21 00:14:19,458 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-21 00:14:19,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:19,463 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-21 00:14:19,476 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898459476"}]},"ts":"1689898459476"} 2023-07-21 00:14:19,478 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-21 00:14:19,482 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-21 00:14:19,483 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, UNASSIGN}] 2023-07-21 00:14:19,492 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, UNASSIGN 2023-07-21 00:14:19,493 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, UNASSIGN 2023-07-21 00:14:19,493 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, UNASSIGN 2023-07-21 00:14:19,494 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, UNASSIGN 2023-07-21 00:14:19,494 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, UNASSIGN 2023-07-21 00:14:19,495 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=782ab6e85c5338f428ea481539b522f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,495 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459494"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898459494"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898459494"}]},"ts":"1689898459494"} 2023-07-21 00:14:19,495 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=04bd849ead499c40f396b236c2baabbc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:19,495 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459495"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898459495"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898459495"}]},"ts":"1689898459495"} 2023-07-21 00:14:19,496 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=d1876148242f8ec205d85e23fec8fa9a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,496 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=34263a8c75ee50d30ee24cdf5ec58cac, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:19,496 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459496"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898459496"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898459496"}]},"ts":"1689898459496"} 2023-07-21 00:14:19,496 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459496"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898459496"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898459496"}]},"ts":"1689898459496"} 2023-07-21 00:14:19,496 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=fc9446dc0a64f3b03140c19b8ca53334, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:19,496 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459496"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898459496"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898459496"}]},"ts":"1689898459496"} 2023-07-21 00:14:19,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=62, state=RUNNABLE; CloseRegionProcedure 782ab6e85c5338f428ea481539b522f7, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:19,500 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=64, state=RUNNABLE; CloseRegionProcedure 04bd849ead499c40f396b236c2baabbc, 
server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:19,501 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=65, state=RUNNABLE; CloseRegionProcedure d1876148242f8ec205d85e23fec8fa9a, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:19,502 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=61, state=RUNNABLE; CloseRegionProcedure 34263a8c75ee50d30ee24cdf5ec58cac, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:19,506 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure fc9446dc0a64f3b03140c19b8ca53334, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:19,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-21 00:14:19,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 04bd849ead499c40f396b236c2baabbc, disabling compactions & flushes 2023-07-21 00:14:19,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 782ab6e85c5338f428ea481539b522f7, disabling compactions & flushes 2023-07-21 00:14:19,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:19,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. after waiting 0 ms 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. after waiting 0 ms 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:19,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 
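The truncate that finishes at 00:14:19,220 (TruncateTableProcedure, pid=49) and the disable that starts at 00:14:19,464 (DisableTableProcedure, pid=60) are each driven by a single synchronous Admin call on the client side. A minimal client-side sketch, assuming an HBase 2.x cluster whose hbase-site.xml is on the classpath; the class name is illustrative and not part of the test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncateThenDisable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // truncateTable requires a disabled table; with preserveSplits=true the
            // existing split points are recreated and the table comes back ENABLED,
            // as the log shows at 00:14:19,216.
            admin.disableTable(table);
            admin.truncateTable(table, true);
            // The disable issued at 00:14:19,464 (pid=60): the master unassigns every
            // region (the CloseRegionProcedures above) and marks the table DISABLED
            // in hbase:meta before this call returns.
            admin.disableTable(table);
        }
    }
}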
2023-07-21 00:14:19,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:19,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:19,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7. 2023-07-21 00:14:19,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 782ab6e85c5338f428ea481539b522f7: 2023-07-21 00:14:19,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc. 2023-07-21 00:14:19,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 04bd849ead499c40f396b236c2baabbc: 2023-07-21 00:14:19,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d1876148242f8ec205d85e23fec8fa9a, disabling compactions & flushes 2023-07-21 00:14:19,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:19,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 2023-07-21 00:14:19,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. after waiting 0 ms 2023-07-21 00:14:19,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 
2023-07-21 00:14:19,685 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=782ab6e85c5338f428ea481539b522f7, regionState=CLOSED 2023-07-21 00:14:19,686 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459685"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898459685"}]},"ts":"1689898459685"} 2023-07-21 00:14:19,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,689 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc9446dc0a64f3b03140c19b8ca53334, disabling compactions & flushes 2023-07-21 00:14:19,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. after waiting 0 ms 2023-07-21 00:14:19,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,690 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=04bd849ead499c40f396b236c2baabbc, regionState=CLOSED 2023-07-21 00:14:19,690 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459690"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898459690"}]},"ts":"1689898459690"} 2023-07-21 00:14:19,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:19,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a. 
2023-07-21 00:14:19,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d1876148242f8ec205d85e23fec8fa9a: 2023-07-21 00:14:19,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,699 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=62 2023-07-21 00:14:19,699 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=62, state=SUCCESS; CloseRegionProcedure 782ab6e85c5338f428ea481539b522f7, server=jenkins-hbase4.apache.org,33545,1689898450890 in 193 msec 2023-07-21 00:14:19,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 34263a8c75ee50d30ee24cdf5ec58cac, disabling compactions & flushes 2023-07-21 00:14:19,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:19,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 2023-07-21 00:14:19,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. after waiting 0 ms 2023-07-21 00:14:19,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 
2023-07-21 00:14:19,701 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=d1876148242f8ec205d85e23fec8fa9a, regionState=CLOSED 2023-07-21 00:14:19,701 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459701"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898459701"}]},"ts":"1689898459701"} 2023-07-21 00:14:19,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:19,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=64 2023-07-21 00:14:19,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=782ab6e85c5338f428ea481539b522f7, UNASSIGN in 216 msec 2023-07-21 00:14:19,708 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=64, state=SUCCESS; CloseRegionProcedure 04bd849ead499c40f396b236c2baabbc, server=jenkins-hbase4.apache.org,42163,1689898450682 in 197 msec 2023-07-21 00:14:19,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04bd849ead499c40f396b236c2baabbc, UNASSIGN in 225 msec 2023-07-21 00:14:19,714 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=65 2023-07-21 00:14:19,714 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=65, state=SUCCESS; CloseRegionProcedure d1876148242f8ec205d85e23fec8fa9a, server=jenkins-hbase4.apache.org,33545,1689898450890 in 208 msec 2023-07-21 00:14:19,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334. 2023-07-21 00:14:19,716 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc9446dc0a64f3b03140c19b8ca53334: 2023-07-21 00:14:19,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d1876148242f8ec205d85e23fec8fa9a, UNASSIGN in 231 msec 2023-07-21 00:14:19,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:19,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac. 
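Admin#disableTable blocks until the procedure completes, which is why the client future keeps logging "Checking to see if procedure is done pid=60". If a caller instead needs to wait on the table state directly, a hedged sketch follows; the timeout and poll interval are illustrative values, not taken from the test:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class TableStateCheck {
    private TableStateCheck() {}

    /**
     * Returns once the table is reported DISABLED, polling the master the same
     * way the client future in the log polls "is procedure done".
     */
    public static void waitUntilDisabled(Admin admin, TableName table, long timeoutMs)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!admin.isTableDisabled(table)) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Timed out waiting for " + table + " to disable");
            }
            Thread.sleep(200);
        }
    }
}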
2023-07-21 00:14:19,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 34263a8c75ee50d30ee24cdf5ec58cac: 2023-07-21 00:14:19,724 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=fc9446dc0a64f3b03140c19b8ca53334, regionState=CLOSED 2023-07-21 00:14:19,724 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689898459724"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898459724"}]},"ts":"1689898459724"} 2023-07-21 00:14:19,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,728 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=34263a8c75ee50d30ee24cdf5ec58cac, regionState=CLOSED 2023-07-21 00:14:19,728 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689898459728"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898459728"}]},"ts":"1689898459728"} 2023-07-21 00:14:19,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-21 00:14:19,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure fc9446dc0a64f3b03140c19b8ca53334, server=jenkins-hbase4.apache.org,42163,1689898450682 in 221 msec 2023-07-21 00:14:19,734 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc9446dc0a64f3b03140c19b8ca53334, UNASSIGN in 248 msec 2023-07-21 00:14:19,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=61 2023-07-21 00:14:19,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=61, state=SUCCESS; CloseRegionProcedure 34263a8c75ee50d30ee24cdf5ec58cac, server=jenkins-hbase4.apache.org,33545,1689898450890 in 229 msec 2023-07-21 00:14:19,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-21 00:14:19,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=34263a8c75ee50d30ee24cdf5ec58cac, UNASSIGN in 251 msec 2023-07-21 00:14:19,741 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898459741"}]},"ts":"1689898459741"} 2023-07-21 00:14:19,748 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-21 00:14:19,752 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-21 00:14:19,762 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 290 msec 2023-07-21 00:14:19,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-21 00:14:19,780 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-21 00:14:19,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,801 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_761444744' 2023-07-21 00:14:19,803 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:19,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:19,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:19,819 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,819 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,819 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,819 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,819 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 00:14:19,824 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/recovered.edits] 2023-07-21 00:14:19,825 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/recovered.edits] 2023-07-21 00:14:19,825 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/recovered.edits] 2023-07-21 00:14:19,826 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/recovered.edits] 2023-07-21 00:14:19,826 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/recovered.edits] 2023-07-21 00:14:19,838 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7/recovered.edits/4.seqid 2023-07-21 00:14:19,839 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/recovered.edits/4.seqid to 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc/recovered.edits/4.seqid 2023-07-21 00:14:19,839 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/782ab6e85c5338f428ea481539b522f7 2023-07-21 00:14:19,840 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a/recovered.edits/4.seqid 2023-07-21 00:14:19,840 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04bd849ead499c40f396b236c2baabbc 2023-07-21 00:14:19,840 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d1876148242f8ec205d85e23fec8fa9a 2023-07-21 00:14:19,840 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334/recovered.edits/4.seqid 2023-07-21 00:14:19,841 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac/recovered.edits/4.seqid 2023-07-21 00:14:19,841 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc9446dc0a64f3b03140c19b8ca53334 2023-07-21 00:14:19,842 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testTableMoveTruncateAndDrop/34263a8c75ee50d30ee24cdf5ec58cac 2023-07-21 00:14:19,842 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-21 00:14:19,845 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,859 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-21 00:14:19,862 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898459865"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898459865"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898459865"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898459865"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,865 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898459865"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,871 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 00:14:19,872 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 34263a8c75ee50d30ee24cdf5ec58cac, NAME => 'Group_testTableMoveTruncateAndDrop,,1689898458391.34263a8c75ee50d30ee24cdf5ec58cac.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 782ab6e85c5338f428ea481539b522f7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689898458391.782ab6e85c5338f428ea481539b522f7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => fc9446dc0a64f3b03140c19b8ca53334, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689898458391.fc9446dc0a64f3b03140c19b8ca53334.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 04bd849ead499c40f396b236c2baabbc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689898458392.04bd849ead499c40f396b236c2baabbc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d1876148242f8ec205d85e23fec8fa9a, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689898458392.d1876148242f8ec205d85e23fec8fa9a.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 00:14:19,872 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
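The DELETE (procId=71) archives each region directory under /archive, removes the five region rows and the table-state row from hbase:meta, and drops the descriptor. On the client it is again one synchronous call; a minimal sketch, assuming an Admin handle as in the earlier snippet:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTable {
    private DropTable() {}

    /**
     * Client-side counterpart of the DELETE seen in the log: the master archives
     * the region directories, deletes the region rows from hbase:meta and finally
     * removes the table descriptor before the call returns.
     */
    public static void drop(Admin admin, TableName table) throws Exception {
        if (admin.tableExists(table)) {
            if (!admin.isTableDisabled(table)) {
                admin.disableTable(table);   // deleteTable requires a disabled table
            }
            admin.deleteTable(table);        // blocks until DeleteTableProcedure completes
        }
    }
}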
2023-07-21 00:14:19,872 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898459872"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:19,874 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-21 00:14:19,876 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-21 00:14:19,878 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 86 msec 2023-07-21 00:14:19,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-21 00:14:19,924 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-21 00:14:19,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:19,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:19,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:19,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:19,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
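The GetRSGroupInfo and ListRSGroupInfos requests in the log are served by the RSGroupAdminEndpoint coprocessor; from Java they are typically issued through RSGroupAdminClient, the same class visible in the stack trace further below. A sketch, assuming the hbase-rsgroup module is on the classpath; the helper name is illustrative:

import java.util.List;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class ListGroups {
    private ListGroups() {}

    /** Prints every rsgroup with its servers and tables, mirroring the ListRSGroupInfos calls in the log. */
    public static void dumpGroups(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
        for (RSGroupInfo group : groups) {
            System.out.println(group.getName() + " -> servers=" + group.getServers()
                + ", tables=" + group.getTables());
        }
    }
}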
2023-07-21 00:14:19,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:19,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:19,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:19,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:19,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:19,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_761444744, current retry=0 2023-07-21 00:14:19,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_761444744 => default 2023-07-21 00:14:19,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:19,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_761444744 2023-07-21 00:14:19,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:19,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:19,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:19,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:19,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:19,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
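The teardown sequence here (move the group's servers back to default, remove the test group, then re-create the master group) maps onto a few RSGroupAdminClient calls. A hedged sketch of that cleanup, assuming the same client as above; the group name is the generated one from the log. Note that moveServers only accepts addresses of live region servers, which is why the attempt just below to move the master's own address (jenkins-hbase4.apache.org:33855) into a group fails with ConstraintException:

import java.util.Set;
import java.util.TreeSet;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class TearDownGroup {
    private TearDownGroup() {}

    /**
     * Empties a test rsgroup and removes it, in the same order the teardown in
     * the log follows: tables and servers go back to the default group first,
     * then the now-empty group can be dropped.
     */
    public static void tearDown(Connection conn, String group) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        if (info == null) {
            return; // group already gone
        }
        Set<TableName> tables = new TreeSet<>(info.getTables());
        if (!tables.isEmpty()) {
            rsGroupAdmin.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);
        }
        Set<Address> servers = new TreeSet<>(info.getServers());
        if (!servers.isEmpty()) {
            rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        rsGroupAdmin.removeRSGroup(group);
    }
}

// Usage (illustrative): tearDown(conn, "Group_testTableMoveTruncateAndDrop_761444744");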
2023-07-21 00:14:19,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:19,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:19,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:19,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:19,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:19,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:19,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:19,991 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:19,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:19,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:19,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:20,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:20,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899660011, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:20,012 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:20,015 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,017 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:20,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:20,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,059 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=493 (was 419) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x27b82929-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_18598075_17 at /127.0.0.1:47906 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43987 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1997362576_17 at /127.0.0.1:59010 [Receiving block BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-64f1e167-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:36751 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336-prefix:jenkins-hbase4.apache.org,43987,1689898455241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43987Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-635-acceptor-0@37135056-ServerConnector@1d8aa3aa{HTTP/1.1, (http/1.1)}{0.0.0.0:45519} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1085622103_17 at /127.0.0.1:59078 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36751 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43987-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1997362576_17 at /127.0.0.1:42726 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60276@0x1ebd84fa sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60276@0x1ebd84fa-SendThread(127.0.0.1:60276) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: PacketResponder: BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1997362576_17 at /127.0.0.1:42858 [Receiving block BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1997362576_17 at /127.0.0.1:47908 [Receiving block BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp532866249-639 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43987 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-668481516-172.31.14.131-1689898444457:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60276@0x1ebd84fa-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging 
thread: hconnection-0x5a6bf6db-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 689) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=523 (was 480) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=3016 (was 3245) 2023-07-21 00:14:20,082 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=493, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=523, ProcessCount=177, AvailableMemoryMB=3015 2023-07-21 00:14:20,082 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-21 00:14:20,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:20,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:20,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:20,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:20,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:20,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:20,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:20,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:20,106 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:20,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:20,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:20,116 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:20,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899660122, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:20,123 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:20,127 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,129 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:20,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:20,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-21 00:14:20,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:44654 deadline: 1689899660136, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 00:14:20,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-21 00:14:20,140 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:44654 deadline: 1689899660140, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 00:14:20,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-21 00:14:20,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:44654 deadline: 1689899660142, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-21 00:14:20,144 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-21 00:14:20,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-21 00:14:20,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:20,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:20,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
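The three rejections above (foo*, foo@, -) together with the acceptance of foo_123 pin down the effective naming rule: despite the wording of the message, underscores are allowed alongside letters and digits. A minimal sketch of a check consistent with that observed behaviour follows; the class name, regex and exception type are illustrative stand-ins, not the HBase checkGroupName source.

import java.util.regex.Pattern;

// Sketch of a group-name check matching the behaviour logged above:
// "foo*", "foo@" and "-" are rejected while "foo_123" is accepted, so the
// effective rule is letters, digits and underscore only.
public final class GroupNameCheckSketch {
  private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

  static void checkGroupName(String name) {
    if (name == null || !VALID.matcher(name).matches()) {
      // HBase raises org.apache.hadoop.hbase.constraint.ConstraintException here;
      // a plain IllegalArgumentException stands in for it in this sketch.
      throw new IllegalArgumentException(
          "RSGroup name should only contain alphanumeric characters: " + name);
    }
  }

  public static void main(String[] args) {
    checkGroupName("foo_123");                 // accepted, as in the log
    for (String bad : new String[] { "foo*", "foo@", "-" }) {
      try {
        checkGroupName(bad);
      } catch (IllegalArgumentException e) {   // rejected, as in the log
        System.out.println("rejected: " + bad);
      }
    }
  }
}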
2023-07-21 00:14:20,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:20,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:20,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:20,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-21 00:14:20,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:20,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:20,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:20,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:20,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:20,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:20,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:20,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:20,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:20,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:20,207 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:20,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:20,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:20,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:20,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899660228, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:20,229 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:20,231 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,233 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:20,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:20,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,255 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496 (was 493) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=523 (was 523), ProcessCount=177 (was 177), AvailableMemoryMB=2998 (was 3015) 2023-07-21 00:14:20,279 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=496, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=523, ProcessCount=177, AvailableMemoryMB=2997 2023-07-21 00:14:20,280 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-21 00:14:20,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:20,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
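The ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup bursts that bracket each test method come from the shared setup and teardown in TestRSGroupsBase, which resets group membership through the RSGroupAdminClient visible in the stack traces. The toy, in-memory sketch below shows that reset sequence; the method names mirror the operations named in the traces, but the signatures, state model and messages are assumptions, not the hbase-rsgroup implementation.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the per-test group reset that produces the log pattern above:
// move each non-default group's tables and servers back to "default", remove
// the group, then re-create the "master" group.
public final class GroupResetSketch {
  private final Map<String, Set<String>> groupServers = new HashMap<>();
  private final Map<String, Set<String>> groupTables = new HashMap<>();

  GroupResetSketch() {
    groupServers.put("default", new HashSet<>(Arrays.asList("rs1:16020", "rs2:16020")));
    groupTables.put("default", new HashSet<>(Arrays.asList("hbase:meta", "hbase:rsgroup")));
  }

  void addRSGroup(String name) {
    groupServers.put(name, new HashSet<>());
    groupTables.put(name, new HashSet<>());
  }

  void removeRSGroup(String name) {
    groupServers.remove(name);
    groupTables.remove(name);
  }

  void moveServers(Set<String> servers, String target) {
    if (servers.isEmpty()) {
      System.out.println("moveServers(): empty set, ignoring (mirrors the logged behaviour)");
      return;
    }
    Set<String> toMove = new HashSet<>(servers); // copy: the caller may pass a group's own set
    groupServers.values().forEach(v -> v.removeAll(toMove));
    groupServers.get(target).addAll(toMove);
  }

  void moveTables(Set<String> tables, String target) {
    if (tables.isEmpty()) {
      System.out.println("moveTables(): empty set, ignoring (mirrors the logged behaviour)");
      return;
    }
    Set<String> toMove = new HashSet<>(tables);
    groupTables.values().forEach(v -> v.removeAll(toMove));
    groupTables.get(target).addAll(toMove);
  }

  /** The reset sequence logged before and after each test method. */
  void resetGroups() {
    for (String group : new ArrayList<>(groupServers.keySet())) {
      if (!"default".equals(group)) {
        moveTables(groupTables.get(group), "default");
        moveServers(groupServers.get(group), "default");
        removeRSGroup(group);
      }
    }
    addRSGroup("master");
    // The real test base then tries to move the master's host:port into the new
    // "master" group; the server rejects that with ConstraintException
    // ("... is either offline or it does not exist") because the master is not a
    // region server, and the test only logs it as "Got this on setup, FYI".
  }

  public static void main(String[] args) {
    GroupResetSketch sketch = new GroupResetSketch();
    sketch.addRSGroup("foo_123");
    sketch.resetGroups();
    System.out.println("groups after reset: " + sketch.groupServers.keySet());
  }
}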
2023-07-21 00:14:20,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:20,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:20,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:20,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:20,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:20,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:20,301 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:20,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:20,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:20,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:20,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:20,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899660315, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:20,316 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:20,318 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,320 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:20,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:20,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:20,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-21 00:14:20,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:20,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:20,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:20,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup bar 2023-07-21 00:14:20,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:20,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:20,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:20,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682, jenkins-hbase4.apache.org,43987,1689898455241] are moved back to default 2023-07-21 00:14:20,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-21 00:14:20,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:20,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:20,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:20,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 00:14:20,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:20,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:20,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:20,358 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:20,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 72 2023-07-21 00:14:20,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-21 00:14:20,361 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,361 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:20,362 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,362 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:20,365 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:20,367 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,368 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 empty. 
2023-07-21 00:14:20,368 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,368 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 00:14:20,403 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:20,404 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 995d3c49a59c9c6fcd21eb8dac5f2f21, NAME => 'Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 995d3c49a59c9c6fcd21eb8dac5f2f21, disabling compactions & flushes 2023-07-21 00:14:20,425 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. after waiting 0 ms 2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:20,425 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:20,425 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:20,429 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:20,431 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898460430"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898460430"}]},"ts":"1689898460430"} 2023-07-21 00:14:20,434 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:20,435 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:20,436 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898460436"}]},"ts":"1689898460436"} 2023-07-21 00:14:20,438 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-21 00:14:20,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, ASSIGN}] 2023-07-21 00:14:20,445 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, ASSIGN 2023-07-21 00:14:20,446 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:20,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-21 00:14:20,598 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:20,598 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898460598"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898460598"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898460598"}]},"ts":"1689898460598"} 2023-07-21 00:14:20,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=73, state=RUNNABLE; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 
00:14:20,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-21 00:14:20,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 995d3c49a59c9c6fcd21eb8dac5f2f21, NAME => 'Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:20,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:20,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,763 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,766 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,779 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:20,779 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:20,780 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 995d3c49a59c9c6fcd21eb8dac5f2f21 columnFamilyName f 2023-07-21 00:14:20,781 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(310): Store=995d3c49a59c9c6fcd21eb8dac5f2f21/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:20,789 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:20,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:20,815 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 995d3c49a59c9c6fcd21eb8dac5f2f21; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10963944960, jitterRate=0.02109694480895996}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:20,815 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:20,816 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21., pid=74, masterSystemTime=1689898460756 2023-07-21 00:14:20,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:20,818 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:20,823 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:20,823 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898460823"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898460823"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898460823"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898460823"}]},"ts":"1689898460823"} 2023-07-21 00:14:20,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=73 2023-07-21 00:14:20,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=73, state=SUCCESS; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098 in 225 msec 2023-07-21 00:14:20,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-21 00:14:20,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, ASSIGN in 386 msec 2023-07-21 00:14:20,837 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:20,837 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898460837"}]},"ts":"1689898460837"} 2023-07-21 00:14:20,839 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-21 00:14:20,842 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:20,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 487 msec 2023-07-21 00:14:20,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-21 00:14:20,964 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 72 completed 2023-07-21 00:14:20,964 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-21 00:14:20,965 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,969 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-21 00:14:20,970 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:20,970 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-21 00:14:20,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-21 00:14:20,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:20,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:20,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:20,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:20,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-21 00:14:20,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 995d3c49a59c9c6fcd21eb8dac5f2f21 to RSGroup bar 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 00:14:20,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:20,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE 2023-07-21 00:14:20,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-21 00:14:20,985 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE 2023-07-21 00:14:20,986 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:20,986 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898460986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898460986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898460986"}]},"ts":"1689898460986"} 2023-07-21 00:14:20,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:21,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 995d3c49a59c9c6fcd21eb8dac5f2f21, disabling compactions & flushes 2023-07-21 00:14:21,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:21,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:21,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. after waiting 0 ms 2023-07-21 00:14:21,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:21,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:21,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:21,154 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:21,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 995d3c49a59c9c6fcd21eb8dac5f2f21 move to jenkins-hbase4.apache.org,42163,1689898450682 record at close sequenceid=2 2023-07-21 00:14:21,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,157 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSED 2023-07-21 00:14:21,158 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898461157"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898461157"}]},"ts":"1689898461157"} 2023-07-21 00:14:21,162 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-21 00:14:21,162 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098 in 172 msec 2023-07-21 00:14:21,163 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:21,314 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 00:14:21,314 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:21,314 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898461314"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898461314"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898461314"}]},"ts":"1689898461314"} 2023-07-21 00:14:21,320 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:21,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:21,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 995d3c49a59c9c6fcd21eb8dac5f2f21, NAME => 'Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:21,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:21,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,482 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,483 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:21,483 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:21,484 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 995d3c49a59c9c6fcd21eb8dac5f2f21 columnFamilyName f 2023-07-21 00:14:21,485 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(310): Store=995d3c49a59c9c6fcd21eb8dac5f2f21/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:21,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,487 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:21,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 995d3c49a59c9c6fcd21eb8dac5f2f21; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10705781600, jitterRate=-0.0029463917016983032}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:21,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:21,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21., pid=77, masterSystemTime=1689898461473 2023-07-21 00:14:21,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:21,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:21,507 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:21,507 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898461507"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898461507"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898461507"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898461507"}]},"ts":"1689898461507"} 2023-07-21 00:14:21,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-21 00:14:21,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,42163,1689898450682 in 192 msec 2023-07-21 00:14:21,513 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE in 530 msec 2023-07-21 00:14:21,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-21 00:14:21,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-21 00:14:21,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:21,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:21,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:21,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-21 00:14:21,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:21,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 00:14:21,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:21,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:44654 deadline: 1689899661994, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-21 00:14:21,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:21,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:21,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:44654 deadline: 1689899661996, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-21 00:14:22,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-21 00:14:22,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:22,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:22,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:22,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:22,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-21 00:14:22,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 995d3c49a59c9c6fcd21eb8dac5f2f21 to RSGroup default 2023-07-21 00:14:22,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE 2023-07-21 00:14:22,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 00:14:22,009 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE 2023-07-21 00:14:22,010 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:22,010 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898462010"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898462010"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898462010"}]},"ts":"1689898462010"} 2023-07-21 00:14:22,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:22,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 995d3c49a59c9c6fcd21eb8dac5f2f21, disabling compactions & flushes 2023-07-21 00:14:22,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:22,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:22,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. after waiting 0 ms 2023-07-21 00:14:22,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:22,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:22,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:22,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:22,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 995d3c49a59c9c6fcd21eb8dac5f2f21 move to jenkins-hbase4.apache.org,46101,1689898451098 record at close sequenceid=5 2023-07-21 00:14:22,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,173 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSED 2023-07-21 00:14:22,173 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898462173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898462173"}]},"ts":"1689898462173"} 2023-07-21 00:14:22,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-21 00:14:22,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,42163,1689898450682 in 162 msec 2023-07-21 00:14:22,177 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:22,327 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:22,327 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898462327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898462327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898462327"}]},"ts":"1689898462327"} 2023-07-21 00:14:22,329 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:22,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 
2023-07-21 00:14:22,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 995d3c49a59c9c6fcd21eb8dac5f2f21, NAME => 'Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:22,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:22,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,488 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,489 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:22,489 DEBUG [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f 2023-07-21 00:14:22,490 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 995d3c49a59c9c6fcd21eb8dac5f2f21 columnFamilyName f 2023-07-21 00:14:22,490 INFO [StoreOpener-995d3c49a59c9c6fcd21eb8dac5f2f21-1] regionserver.HStore(310): Store=995d3c49a59c9c6fcd21eb8dac5f2f21/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:22,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,493 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:22,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 995d3c49a59c9c6fcd21eb8dac5f2f21; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9733788960, jitterRate=-0.0934702605009079}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:22,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:22,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21., pid=80, masterSystemTime=1689898462481 2023-07-21 00:14:22,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:22,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:22,501 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:22,501 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898462501"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898462501"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898462501"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898462501"}]},"ts":"1689898462501"} 2023-07-21 00:14:22,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-21 00:14:22,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098 in 174 msec 2023-07-21 00:14:22,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, REOPEN/MOVE in 499 msec 2023-07-21 00:14:23,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-21 00:14:23,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
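The entries above trace the REOPEN/MOVE of region 995d3c49a59c9c6fcd21eb8dac5f2f21 until RSGroupAdminServer reports every region of Group_testFailRemoveGroup on the target group and the MoveTables RPC returns. A minimal client-side sketch of the same table move, assuming a reachable cluster and the branch-2.4 RSGroupAdminClient API (the table and group names are taken from the log; everything else is illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the test table back to the 'default' group; the master then runs
          // TransitRegionStateProcedure (REOPEN/MOVE) for each region, as logged above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
              "default");
        }
      }
    }

The call blocks until the master's procedure wait (ProcedureSyncWait in the log) completes, so the regions are already on servers of the target group when it returns.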
2023-07-21 00:14:23,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:23,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 00:14:23,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:23,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:44654 deadline: 1689899663015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
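The ConstraintException above is the guard in RSGroupAdminServer.removeRSGroup: a group that still owns servers cannot be removed. The next entries show the test first moving the three servers back to 'default' and only then retrying the removal. A hedged sketch of that order of operations, using the group name and one server address from the log (the surrounding setup is illustrative):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupSafely {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // 1. Empty the group: move its servers back to 'default' first, otherwise
          //    removeRSGroup fails with the ConstraintException seen above.
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromString("jenkins-hbase4.apache.org:42163"));
          rsGroupAdmin.moveServers(servers, "default");
          // 2. Once the group holds no servers (and no tables), it can be removed.
          rsGroupAdmin.removeRSGroup("bar");
        }
      }
    }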
2023-07-21 00:14:23,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:23,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-21 00:14:23,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:23,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-21 00:14:23,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682, jenkins-hbase4.apache.org,43987,1689898455241] are moved back to bar 2023-07-21 00:14:23,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-21 00:14:23,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:23,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-21 00:14:23,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:23,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:23,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,044 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-21 00:14:23,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-21 00:14:23,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 00:14:23,050 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898463050"}]},"ts":"1689898463050"} 2023-07-21 00:14:23,051 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-21 00:14:23,054 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-21 00:14:23,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, UNASSIGN}] 2023-07-21 00:14:23,061 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, UNASSIGN 2023-07-21 00:14:23,063 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:23,063 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898463063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898463063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898463063"}]},"ts":"1689898463063"} 2023-07-21 00:14:23,065 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 00:14:23,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:23,221 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 995d3c49a59c9c6fcd21eb8dac5f2f21, disabling compactions & flushes 2023-07-21 00:14:23,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:23,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:23,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. after waiting 0 ms 2023-07-21 00:14:23,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:23,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 00:14:23,233 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21. 2023-07-21 00:14:23,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 995d3c49a59c9c6fcd21eb8dac5f2f21: 2023-07-21 00:14:23,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:23,236 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=995d3c49a59c9c6fcd21eb8dac5f2f21, regionState=CLOSED 2023-07-21 00:14:23,236 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689898463236"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898463236"}]},"ts":"1689898463236"} 2023-07-21 00:14:23,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-21 00:14:23,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; CloseRegionProcedure 995d3c49a59c9c6fcd21eb8dac5f2f21, server=jenkins-hbase4.apache.org,46101,1689898451098 in 175 msec 2023-07-21 00:14:23,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-21 00:14:23,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=995d3c49a59c9c6fcd21eb8dac5f2f21, UNASSIGN in 187 msec 2023-07-21 00:14:23,248 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898463248"}]},"ts":"1689898463248"} 2023-07-21 00:14:23,250 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-21 00:14:23,252 INFO [PEWorker-4] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-21 00:14:23,254 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 207 msec 2023-07-21 00:14:23,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-21 00:14:23,352 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-21 00:14:23,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-21 00:14:23,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,357 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=84, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-21 00:14:23,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:23,363 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=84, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 00:14:23,368 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:23,381 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits] 2023-07-21 00:14:23,390 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/10.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21/recovered.edits/10.seqid 2023-07-21 00:14:23,391 DEBUG [HFileArchiver-4] 
backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testFailRemoveGroup/995d3c49a59c9c6fcd21eb8dac5f2f21 2023-07-21 00:14:23,391 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-21 00:14:23,394 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=84, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,401 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-21 00:14:23,407 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-21 00:14:23,409 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=84, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,409 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-21 00:14:23,409 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898463409"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:23,411 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 00:14:23,411 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 995d3c49a59c9c6fcd21eb8dac5f2f21, NAME => 'Group_testFailRemoveGroup,,1689898460354.995d3c49a59c9c6fcd21eb8dac5f2f21.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 00:14:23,411 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-21 00:14:23,411 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898463411"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:23,415 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-21 00:14:23,418 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=84, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-21 00:14:23,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 65 msec 2023-07-21 00:14:23,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-21 00:14:23,468 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-21 00:14:23,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:23,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
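Dropping the table follows the standard two-step teardown visible above: a DisableTableProcedure (pid=81) followed by a DeleteTableProcedure (pid=84) that archives the region directory and cleans hbase:meta. A minimal sketch of the equivalent client calls, assuming the same table name as in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTestTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // A table must be disabled before it can be deleted; each call blocks
          // until the corresponding master procedure (pid=81 / pid=84 above) completes.
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);
          }
          admin.deleteTable(table);
        }
      }
    }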
2023-07-21 00:14:23,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:23,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:23,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:23,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:23,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:23,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:23,489 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:23,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:23,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:23,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:23,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:23,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:23,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899663510, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:23,511 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:23,514 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:23,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,515 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:23,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:23,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:23,536 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498 (was 496) Potentially hanging thread: hconnection-0x27b82929-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7db6cade-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1085622103_17 at /127.0.0.1:53970 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x27b82929-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_968305499_17 at /127.0.0.1:48056 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=769 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=523 (was 523), ProcessCount=177 (was 177), AvailableMemoryMB=2911 (was 2997) 2023-07-21 00:14:23,560 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=498, OpenFileDescriptor=769, MaxFileDescriptor=60000, SystemLoadAverage=523, ProcessCount=177, AvailableMemoryMB=2910 2023-07-21 00:14:23,560 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-21 00:14:23,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:23,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
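Between test methods the base class polls ListRSGroupInfos/GetRSGroupInfo until only the expected groups remain, as in the "Waiting for cleanup to finish" entry above. A small sketch of those same inspection calls, again assuming the branch-2.4 RSGroupAdminClient API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Print every group with its servers and tables, mirroring ListRSGroupInfos.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers()
                + " tables=" + group.getTables());
          }
          // Look up a single group, mirroring the GetRSGroupInfo call for 'default'.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println("default servers: " + defaultGroup.getServers());
        }
      }
    }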
2023-07-21 00:14:23,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:23,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:23,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:23,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:23,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:23,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:23,585 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:23,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:23,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:23,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:23,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:23,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:23,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899663602, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:23,603 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:23,607 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:23,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,609 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:23,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:23,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:23,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:23,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:23,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_490356625 2023-07-21 00:14:23,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:23,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,627 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:23,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:23,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33545] to rsgroup Group_testMultiTableMove_490356625 2023-07-21 00:14:23,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:23,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890] are moved back to default 2023-07-21 00:14:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_490356625 2023-07-21 00:14:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:23,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:23,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:23,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_490356625 2023-07-21 00:14:23,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:23,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:23,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:23,652 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:23,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 85 2023-07-21 00:14:23,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 00:14:23,660 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:23,660 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:23,661 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:23,661 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:23,664 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:23,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:23,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 empty. 
2023-07-21 00:14:23,667 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:23,667 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 00:14:23,690 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:23,691 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0ef97b1a79059723a683adaa3270d7d1, NAME => 'GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:23,711 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:23,712 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 0ef97b1a79059723a683adaa3270d7d1, disabling compactions & flushes 2023-07-21 00:14:23,712 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:23,712 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:23,712 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. after waiting 0 ms 2023-07-21 00:14:23,712 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:23,712 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 
2023-07-21 00:14:23,712 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 0ef97b1a79059723a683adaa3270d7d1: 2023-07-21 00:14:23,715 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:23,716 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898463716"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898463716"}]},"ts":"1689898463716"} 2023-07-21 00:14:23,718 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:23,718 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:23,719 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898463719"}]},"ts":"1689898463719"} 2023-07-21 00:14:23,720 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-21 00:14:23,725 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:23,725 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:23,725 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:23,725 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:23,725 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:23,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, ASSIGN}] 2023-07-21 00:14:23,728 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, ASSIGN 2023-07-21 00:14:23,728 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42163,1689898450682; forceNewPlan=false, retain=false 2023-07-21 00:14:23,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 00:14:23,879 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 00:14:23,880 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:23,881 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898463880"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898463880"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898463880"}]},"ts":"1689898463880"} 2023-07-21 00:14:23,883 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE; OpenRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:23,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 00:14:24,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:24,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ef97b1a79059723a683adaa3270d7d1, NAME => 'GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:24,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:24,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,043 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,046 DEBUG [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/f 2023-07-21 00:14:24,046 DEBUG [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/f 2023-07-21 00:14:24,047 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ef97b1a79059723a683adaa3270d7d1 columnFamilyName f 2023-07-21 00:14:24,048 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] regionserver.HStore(310): Store=0ef97b1a79059723a683adaa3270d7d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:24,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:24,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:24,059 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ef97b1a79059723a683adaa3270d7d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10030780480, jitterRate=-0.06581076979637146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:24,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ef97b1a79059723a683adaa3270d7d1: 2023-07-21 00:14:24,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1., pid=87, masterSystemTime=1689898464034 2023-07-21 00:14:24,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:24,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 
2023-07-21 00:14:24,062 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:24,062 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464062"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898464062"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898464062"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898464062"}]},"ts":"1689898464062"} 2023-07-21 00:14:24,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-21 00:14:24,075 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; OpenRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,42163,1689898450682 in 190 msec 2023-07-21 00:14:24,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-21 00:14:24,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, ASSIGN in 350 msec 2023-07-21 00:14:24,080 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:24,080 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898464080"}]},"ts":"1689898464080"} 2023-07-21 00:14:24,082 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-21 00:14:24,087 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:24,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 438 msec 2023-07-21 00:14:24,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-21 00:14:24,260 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 85 completed 2023-07-21 00:14:24,261 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-21 00:14:24,261 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:24,265 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-21 00:14:24,266 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:24,266 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-21 00:14:24,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:24,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:24,271 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:24,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 88 2023-07-21 00:14:24,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 00:14:24,275 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:24,275 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:24,275 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:24,276 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:24,281 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:24,283 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,283 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 empty. 
2023-07-21 00:14:24,284 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,284 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 00:14:24,311 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:24,319 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => f4c8a678fa79da8c47665a82cb3f4cc7, NAME => 'GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing f4c8a678fa79da8c47665a82cb3f4cc7, disabling compactions & flushes 2023-07-21 00:14:24,341 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. after waiting 0 ms 2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:24,341 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 
2023-07-21 00:14:24,341 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for f4c8a678fa79da8c47665a82cb3f4cc7: 2023-07-21 00:14:24,344 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:24,345 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464345"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898464345"}]},"ts":"1689898464345"} 2023-07-21 00:14:24,347 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:24,348 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:24,348 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898464348"}]},"ts":"1689898464348"} 2023-07-21 00:14:24,349 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-21 00:14:24,353 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:24,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:24,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:24,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:24,354 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:24,354 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, ASSIGN}] 2023-07-21 00:14:24,356 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, ASSIGN 2023-07-21 00:14:24,357 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:24,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 00:14:24,507 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 00:14:24,509 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:24,509 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898464509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898464509"}]},"ts":"1689898464509"} 2023-07-21 00:14:24,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:24,527 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 00:14:24,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 00:14:24,668 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:24,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f4c8a678fa79da8c47665a82cb3f4cc7, NAME => 'GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:24,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:24,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,672 INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,675 DEBUG [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/f 2023-07-21 00:14:24,675 DEBUG [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/f 2023-07-21 00:14:24,675 
INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f4c8a678fa79da8c47665a82cb3f4cc7 columnFamilyName f 2023-07-21 00:14:24,680 INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] regionserver.HStore(310): Store=f4c8a678fa79da8c47665a82cb3f4cc7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:24,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:24,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:24,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f4c8a678fa79da8c47665a82cb3f4cc7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11214486400, jitterRate=0.044430434703826904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:24,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f4c8a678fa79da8c47665a82cb3f4cc7: 2023-07-21 00:14:24,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7., pid=90, masterSystemTime=1689898464663 2023-07-21 00:14:24,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:24,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 
2023-07-21 00:14:24,695 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:24,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464695"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898464695"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898464695"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898464695"}]},"ts":"1689898464695"} 2023-07-21 00:14:24,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-21 00:14:24,701 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,43987,1689898455241 in 187 msec 2023-07-21 00:14:24,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-21 00:14:24,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, ASSIGN in 347 msec 2023-07-21 00:14:24,706 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:24,706 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898464706"}]},"ts":"1689898464706"} 2023-07-21 00:14:24,708 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-21 00:14:24,712 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:24,714 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 444 msec 2023-07-21 00:14:24,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-21 00:14:24,877 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 88 completed 2023-07-21 00:14:24,877 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-21 00:14:24,877 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:24,882 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-21 00:14:24,882 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:24,882 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-21 00:14:24,883 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:24,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 00:14:24,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:24,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 00:14:24,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:24,900 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_490356625 2023-07-21 00:14:24,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_490356625 2023-07-21 00:14:24,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:24,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:24,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:24,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:24,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_490356625 2023-07-21 00:14:24,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region f4c8a678fa79da8c47665a82cb3f4cc7 to RSGroup Group_testMultiTableMove_490356625 2023-07-21 00:14:24,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, REOPEN/MOVE 2023-07-21 00:14:24,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_490356625 2023-07-21 00:14:24,912 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 0ef97b1a79059723a683adaa3270d7d1 to RSGroup Group_testMultiTableMove_490356625 2023-07-21 00:14:24,912 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, REOPEN/MOVE 2023-07-21 00:14:24,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, REOPEN/MOVE 2023-07-21 00:14:24,913 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:24,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_490356625, current retry=0 2023-07-21 00:14:24,914 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, REOPEN/MOVE 2023-07-21 00:14:24,915 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464913"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898464913"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898464913"}]},"ts":"1689898464913"} 2023-07-21 00:14:24,915 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:24,916 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898464915"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898464915"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898464915"}]},"ts":"1689898464915"} 2023-07-21 00:14:24,917 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=91, state=RUNNABLE; CloseRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:24,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=92, state=RUNNABLE; CloseRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:25,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f4c8a678fa79da8c47665a82cb3f4cc7, disabling compactions & flushes 2023-07-21 00:14:25,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. after waiting 0 ms 2023-07-21 00:14:25,073 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ef97b1a79059723a683adaa3270d7d1, disabling compactions & flushes 2023-07-21 00:14:25,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:25,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:25,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. after waiting 0 ms 2023-07-21 00:14:25,076 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:25,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:25,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:25,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f4c8a678fa79da8c47665a82cb3f4cc7: 2023-07-21 00:14:25,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f4c8a678fa79da8c47665a82cb3f4cc7 move to jenkins-hbase4.apache.org,33545,1689898450890 record at close sequenceid=2 2023-07-21 00:14:25,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 
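The MoveTables request logged above (Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_490356625) is what drives the region closes recorded here: every region of the moved tables gets a REOPEN/MOVE TransitRegionStateProcedure so it can come back up on a server inside the target group. A minimal client-side sketch of such a call, assuming the RSGroupAdminClient API from the hbase-rsgroup module (table and group names are taken from the log):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<TableName> tables = new HashSet<>();
          tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
          tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
          // The master records the new table-to-group mapping in the rsgroup znodes,
          // then schedules a REOPEN/MOVE procedure per region; the call returns once
          // all regions have been reopened on servers of the target group.
          rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_490356625");
        }
      }
    }

The ProcedureSyncWait entry further down (waitFor pid=91, on the same RPC handler thread) is the master blocking this request until the last region move finishes.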
2023-07-21 00:14:25,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ef97b1a79059723a683adaa3270d7d1: 2023-07-21 00:14:25,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0ef97b1a79059723a683adaa3270d7d1 move to jenkins-hbase4.apache.org,33545,1689898450890 record at close sequenceid=2 2023-07-21 00:14:25,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,083 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=CLOSED 2023-07-21 00:14:25,083 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465083"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898465083"}]},"ts":"1689898465083"} 2023-07-21 00:14:25,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,084 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=CLOSED 2023-07-21 00:14:25,084 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465084"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898465084"}]},"ts":"1689898465084"} 2023-07-21 00:14:25,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=91 2023-07-21 00:14:25,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=91, state=SUCCESS; CloseRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,43987,1689898455241 in 168 msec 2023-07-21 00:14:25,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=92 2023-07-21 00:14:25,087 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:25,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=92, state=SUCCESS; CloseRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,42163,1689898450682 in 166 msec 2023-07-21 00:14:25,088 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33545,1689898450890; forceNewPlan=false, retain=false 2023-07-21 00:14:25,238 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:25,238 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:25,238 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898465238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898465238"}]},"ts":"1689898465238"} 2023-07-21 00:14:25,238 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465238"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898465238"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898465238"}]},"ts":"1689898465238"} 2023-07-21 00:14:25,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=92, state=RUNNABLE; OpenRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:25,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=91, state=RUNNABLE; OpenRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:25,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:25,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0ef97b1a79059723a683adaa3270d7d1, NAME => 'GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:25,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,396 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:25,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,398 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,399 DEBUG [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/f 2023-07-21 00:14:25,399 DEBUG [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/f 2023-07-21 00:14:25,400 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0ef97b1a79059723a683adaa3270d7d1 columnFamilyName f 2023-07-21 00:14:25,401 INFO [StoreOpener-0ef97b1a79059723a683adaa3270d7d1-1] regionserver.HStore(310): Store=0ef97b1a79059723a683adaa3270d7d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:25,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:25,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0ef97b1a79059723a683adaa3270d7d1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9645378560, jitterRate=-0.10170412063598633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:25,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0ef97b1a79059723a683adaa3270d7d1: 2023-07-21 00:14:25,407 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1., pid=95, masterSystemTime=1689898465391 2023-07-21 00:14:25,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:25,409 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 
2023-07-21 00:14:25,409 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f4c8a678fa79da8c47665a82cb3f4cc7, NAME => 'GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:25,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:25,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,410 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465409"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898465409"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898465409"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898465409"}]},"ts":"1689898465409"} 2023-07-21 00:14:25,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:25,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,412 INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,413 DEBUG [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/f 2023-07-21 00:14:25,413 DEBUG [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/f 2023-07-21 00:14:25,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=92 2023-07-21 00:14:25,413 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=92, state=SUCCESS; OpenRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,33545,1689898450890 in 171 msec 2023-07-21 00:14:25,414 INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f4c8a678fa79da8c47665a82cb3f4cc7 columnFamilyName f 2023-07-21 00:14:25,414 INFO [StoreOpener-f4c8a678fa79da8c47665a82cb3f4cc7-1] regionserver.HStore(310): Store=f4c8a678fa79da8c47665a82cb3f4cc7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:25,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, REOPEN/MOVE in 500 msec 2023-07-21 00:14:25,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:25,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f4c8a678fa79da8c47665a82cb3f4cc7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11194810240, jitterRate=0.042597949504852295}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:25,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f4c8a678fa79da8c47665a82cb3f4cc7: 2023-07-21 00:14:25,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7., pid=96, masterSystemTime=1689898465391 2023-07-21 00:14:25,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:25,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 
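Both regions are now open on jenkins-hbase4.apache.org,33545, a member of the target group, and the GetRSGroupInfoOfTable requests a few entries below are how the test re-reads the table-to-group mapping once the move returns. A rough verification sketch, assuming the same RSGroupAdminClient API plus the standard RegionLocator client API (the helper below is illustrative, not the test's own code):

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupMoveCheck {
      // Checks that a table is mapped to the expected rsgroup and that its regions
      // are hosted by servers belonging to that group.
      static void assertTableInGroup(Connection conn, TableName table, String group) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
        if (!group.equals(info.getName())) {
          throw new AssertionError(table + " is in group " + info.getName() + ", expected " + group);
        }
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            if (!info.getServers().contains(loc.getServerName().getAddress())) {
              throw new AssertionError(loc.getRegion() + " is hosted outside group " + group);
            }
          }
        }
      }
    }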
2023-07-21 00:14:25,425 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:25,425 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465425"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898465425"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898465425"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898465425"}]},"ts":"1689898465425"} 2023-07-21 00:14:25,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=91 2023-07-21 00:14:25,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=91, state=SUCCESS; OpenRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,33545,1689898450890 in 186 msec 2023-07-21 00:14:25,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, REOPEN/MOVE in 518 msec 2023-07-21 00:14:25,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=91 2023-07-21 00:14:25,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_490356625. 2023-07-21 00:14:25,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:25,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:25,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:25,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-21 00:14:25,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:25,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-21 00:14:25,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:25,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:25,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_490356625 2023-07-21 00:14:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:25,928 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-21 00:14:25,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-21 00:14:25,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:25,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 00:14:25,935 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898465935"}]},"ts":"1689898465935"} 2023-07-21 00:14:25,939 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-21 00:14:25,941 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-21 00:14:25,942 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, UNASSIGN}] 2023-07-21 00:14:25,944 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, UNASSIGN 2023-07-21 00:14:25,946 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:25,946 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898465946"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898465946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898465946"}]},"ts":"1689898465946"} 2023-07-21 00:14:25,948 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; CloseRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,33545,1689898450890}] 
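The DisableTableProcedure stored here (pid=97), and the DeleteTableProcedure that follows it, are the normal disable-then-delete cleanup issued from the client at the end of the test method. A minimal sketch of the equivalent calls, assuming the synchronous Admin API (table name from the log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          // disableTable blocks until the DisableTableProcedure completes:
          // regions are unassigned and the table state is set to DISABLED in hbase:meta.
          admin.disableTable(table);
          // deleteTable archives the region directories, removes the region rows and
          // the table state from hbase:meta, and drops the table descriptor.
          admin.deleteTable(table);
        }
      }
    }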
2023-07-21 00:14:26,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 00:14:26,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:26,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0ef97b1a79059723a683adaa3270d7d1, disabling compactions & flushes 2023-07-21 00:14:26,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:26,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:26,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. after waiting 0 ms 2023-07-21 00:14:26,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 2023-07-21 00:14:26,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:26,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1. 
2023-07-21 00:14:26,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0ef97b1a79059723a683adaa3270d7d1: 2023-07-21 00:14:26,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:26,111 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=0ef97b1a79059723a683adaa3270d7d1, regionState=CLOSED 2023-07-21 00:14:26,111 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898466111"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898466111"}]},"ts":"1689898466111"} 2023-07-21 00:14:26,114 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-21 00:14:26,115 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; CloseRegionProcedure 0ef97b1a79059723a683adaa3270d7d1, server=jenkins-hbase4.apache.org,33545,1689898450890 in 164 msec 2023-07-21 00:14:26,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-21 00:14:26,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=0ef97b1a79059723a683adaa3270d7d1, UNASSIGN in 173 msec 2023-07-21 00:14:26,118 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898466118"}]},"ts":"1689898466118"} 2023-07-21 00:14:26,122 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-21 00:14:26,123 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-21 00:14:26,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 195 msec 2023-07-21 00:14:26,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-21 00:14:26,238 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-21 00:14:26,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-21 00:14:26,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,242 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_490356625' 2023-07-21 00:14:26,243 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=100, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:26,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:26,249 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:26,251 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits] 2023-07-21 00:14:26,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 00:14:26,258 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1/recovered.edits/7.seqid 2023-07-21 00:14:26,259 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveA/0ef97b1a79059723a683adaa3270d7d1 2023-07-21 00:14:26,259 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-21 00:14:26,262 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=100, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,264 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-21 00:14:26,265 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-21 00:14:26,267 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=100, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,267 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-21 00:14:26,267 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898466267"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:26,269 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 00:14:26,269 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0ef97b1a79059723a683adaa3270d7d1, NAME => 'GrouptestMultiTableMoveA,,1689898463648.0ef97b1a79059723a683adaa3270d7d1.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 00:14:26,269 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-21 00:14:26,269 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898466269"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:26,272 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-21 00:14:26,274 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=100, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-21 00:14:26,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 35 msec 2023-07-21 00:14:26,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-21 00:14:26,354 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-21 00:14:26,355 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-21 00:14:26,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-21 00:14:26,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-21 00:14:26,363 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898466363"}]},"ts":"1689898466363"} 2023-07-21 00:14:26,365 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-21 00:14:26,367 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-21 00:14:26,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, UNASSIGN}] 2023-07-21 00:14:26,370 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, UNASSIGN 2023-07-21 00:14:26,371 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:26,371 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898466371"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898466371"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898466371"}]},"ts":"1689898466371"} 2023-07-21 00:14:26,375 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; CloseRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,33545,1689898450890}] 2023-07-21 00:14:26,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-21 00:14:26,528 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:26,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f4c8a678fa79da8c47665a82cb3f4cc7, disabling compactions & flushes 2023-07-21 00:14:26,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:26,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:26,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. after waiting 0 ms 2023-07-21 00:14:26,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 2023-07-21 00:14:26,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:26,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7. 
2023-07-21 00:14:26,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f4c8a678fa79da8c47665a82cb3f4cc7: 2023-07-21 00:14:26,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:26,539 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=f4c8a678fa79da8c47665a82cb3f4cc7, regionState=CLOSED 2023-07-21 00:14:26,540 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689898466539"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898466539"}]},"ts":"1689898466539"} 2023-07-21 00:14:26,546 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-21 00:14:26,546 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; CloseRegionProcedure f4c8a678fa79da8c47665a82cb3f4cc7, server=jenkins-hbase4.apache.org,33545,1689898450890 in 167 msec 2023-07-21 00:14:26,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-21 00:14:26,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=f4c8a678fa79da8c47665a82cb3f4cc7, UNASSIGN in 178 msec 2023-07-21 00:14:26,550 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898466549"}]},"ts":"1689898466549"} 2023-07-21 00:14:26,552 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-21 00:14:26,555 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-21 00:14:26,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 201 msec 2023-07-21 00:14:26,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-21 00:14:26,664 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 101 completed 2023-07-21 00:14:26,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-21 00:14:26,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,668 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_490356625' 2023-07-21 00:14:26,669 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=104, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:26,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:26,673 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:26,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 00:14:26,675 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits] 2023-07-21 00:14:26,680 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits/7.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7/recovered.edits/7.seqid 2023-07-21 00:14:26,681 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/GrouptestMultiTableMoveB/f4c8a678fa79da8c47665a82cb3f4cc7 2023-07-21 00:14:26,681 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-21 00:14:26,684 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=104, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,686 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-21 00:14:26,688 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-21 00:14:26,689 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=104, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,689 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-21 00:14:26,689 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898466689"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:26,691 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 00:14:26,691 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f4c8a678fa79da8c47665a82cb3f4cc7, NAME => 'GrouptestMultiTableMoveB,,1689898464268.f4c8a678fa79da8c47665a82cb3f4cc7.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 00:14:26,691 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-21 00:14:26,691 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898466691"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:26,692 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-21 00:14:26,697 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=104, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-21 00:14:26,698 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 32 msec 2023-07-21 00:14:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-21 00:14:26,776 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-21 00:14:26,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:26,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
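Both tables have now been deleted, and the "Removing deleted table ... from rsgroup" entries above show the RSGroupAdminEndpoint stripping each one from Group_testMultiTableMove_490356625 as part of the delete, so the group should reference no tables before the teardown removes it. A small check sketch, again assuming the RSGroupAdminClient API (group name from the log; the helper is illustrative):

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupTableSetCheck {
      // Asserts that an rsgroup no longer references any tables, e.g. after the
      // tables it held have been deleted.
      static void assertGroupHasNoTables(Connection conn, String group) throws Exception {
        RSGroupInfo info = new RSGroupAdminClient(conn).getRSGroupInfo(group);
        if (!info.getTables().isEmpty()) {
          throw new AssertionError(group + " still references tables: " + info.getTables());
        }
      }
    }

At this point in the log, assertGroupHasNoTables(conn, "Group_testMultiTableMove_490356625") would be expected to pass.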
2023-07-21 00:14:26,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:26,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:26,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_490356625 2023-07-21 00:14:26,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:26,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_490356625, current retry=0 2023-07-21 00:14:26,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890] are moved back to Group_testMultiTableMove_490356625 2023-07-21 00:14:26,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_490356625 => default 2023-07-21 00:14:26,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:26,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_490356625 2023-07-21 00:14:26,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:26,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:26,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:26,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
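The remaining entries are TestRSGroupsBase.tearDownAfterMethod restoring the default layout: servers are moved back to the default group, the test group is removed, and the 'master' group is recreated. The last step tries to move the master's own address (jenkins-hbase4.apache.org:33855) into that group; since the master is not a region server, it fails with the ConstraintException recorded below, which the teardown logs as a non-fatal warning. A hedged sketch of that step, assuming the RSGroupAdminClient moveServers API and the Address helper (host and port from the log):

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TeardownMoveMasterSketch {
      // Final teardown step: moving the master's address into the 'master' rsgroup
      // fails because only live region servers can be moved; the test logs the
      // resulting exception ("Got this on setup, FYI") and carries on.
      static void moveMasterIfPossible(RSGroupAdminClient rsGroupAdmin) {
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33855)),
              "master");
        } catch (IOException e) {
          // In this run the failure surfaces as a ConstraintException; it is expected.
        }
      }
    }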
2023-07-21 00:14:26,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:26,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:26,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:26,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:26,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:26,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:26,805 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:26,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:26,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:26,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:26,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:26,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:26,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 507 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899666816, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:26,817 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:26,819 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:26,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,820 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:26,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:26,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,841 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=498 (was 498), OpenFileDescriptor=760 (was 769), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=537 (was 523) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=2839 (was 2910) 2023-07-21 00:14:26,864 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=498, OpenFileDescriptor=760, MaxFileDescriptor=60000, SystemLoadAverage=537, ProcessCount=177, AvailableMemoryMB=2838 2023-07-21 00:14:26,864 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-21 00:14:26,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:26,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:26,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:26,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:26,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:26,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:26,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:26,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:26,888 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:26,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:26,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:26,901 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:26,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:26,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:26,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 535 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899666913, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:26,919 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:26,923 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:26,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,925 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:26,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:26,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:26,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-21 00:14:26,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:26,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:26,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:26,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup oldGroup 2023-07-21 00:14:26,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:26,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:26,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:26,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to default 2023-07-21 00:14:26,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-21 00:14:26,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:26,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 00:14:26,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-21 00:14:26,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:26,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:26,972 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-21 00:14:26,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 00:14:26,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:26,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:26,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:26,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:26,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987] to rsgroup anotherRSGroup 2023-07-21 00:14:26,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:26,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 00:14:26,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:26,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:26,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43987,1689898455241] are moved back to default 2023-07-21 00:14:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-21 00:14:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:26,998 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:26,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 00:14:27,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-21 00:14:27,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-21 00:14:27,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:44654 deadline: 1689899667013, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-21 00:14:27,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-21 00:14:27,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:44654 deadline: 1689899667016, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-21 00:14:27,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-21 00:14:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:44654 deadline: 1689899667017, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-21 00:14:27,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-21 00:14:27,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:44654 deadline: 1689899667019, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-21 00:14:27,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:27,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
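The rename attempts above (nonExistingRSGroup to newRSGroup1, oldGroup to anotherRSGroup, default to newRSGroup2, and oldGroup to default) each fail with a ConstraintException, which is exactly what testRenameRSGroupConstraints probes: the source group must exist, the target name must be free, and the default group cannot be renamed. A small sketch of driving those checks from the client side; renameRSGroup(oldName, newName) on RSGroupAdminClient is assumed to exist and to surface the server's ConstraintException, matching the RenameRSGroup endpoint calls recorded here.

// Sketch only: exercise the rename constraints observed in the log above.
// Group names are taken from the log; the client-side renameRSGroup call
// is an assumption based on the server-side RenameRSGroup endpoint.
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RenameConstraintChecksSketch {
  static void check(RSGroupAdminClient admin) throws Exception {
    expectRejected(admin, "nonExistingRSGroup", "newRSGroup1"); // source group does not exist
    expectRejected(admin, "oldGroup", "anotherRSGroup");        // target name already taken
    expectRejected(admin, "default", "newRSGroup2");            // default group cannot be renamed
    expectRejected(admin, "oldGroup", "default");               // target name already taken
  }

  private static void expectRejected(RSGroupAdminClient admin, String from, String to)
      throws Exception {
    try {
      admin.renameRSGroup(from, to);
      throw new AssertionError("expected ConstraintException for " + from + " -> " + to);
    } catch (ConstraintException expected) {
      // Matches the "does not exist" / "already exists" / "Can't rename
      // default rsgroup" messages from MasterRpcServices above.
    }
  }
}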
2023-07-21 00:14:27,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:27,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987] to rsgroup default 2023-07-21 00:14:27,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-21 00:14:27,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:27,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:27,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-21 00:14:27,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43987,1689898455241] are moved back to anotherRSGroup 2023-07-21 00:14:27,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-21 00:14:27,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:27,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-21 00:14:27,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:27,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 00:14:27,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:27,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:27,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-21 00:14:27,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:27,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:27,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-21 00:14:27,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-21 00:14:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to oldGroup 2023-07-21 00:14:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-21 00:14:27,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:27,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-21 00:14:27,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:27,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:27,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:27,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
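In the block that follows, the harness once more tries to move jenkins-hbase4.apache.org:33855, the active master's RPC address rather than a region server, into the re-added master group, and the master rejects it with the same ConstraintException as before; TestRSGroupsBase only logs the failure and carries on. A sketch of issuing that kind of move and tolerating the rejection; Address.fromParts is taken as the way to build the host and port, and swallowing the exception mirrors the "Got this on setup, FYI" warning rather than any documented contract.

// Sketch only: move one server address into a group and tolerate the
// "is either offline or it does not exist" rejection seen in the log below.
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveMasterAddressSketch {
  static void tryMove(RSGroupAdminClient admin, String host, int rpcPort) throws Exception {
    Address server = Address.fromParts(host, rpcPort);   // e.g. jenkins-hbase4.apache.org:33855
    try {
      admin.moveServers(Collections.singleton(server), "master");
    } catch (ConstraintException expected) {
      // The address is not a live region server, so the master refuses the
      // move; the test's setup and teardown treat this as informational only.
    }
  }
}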
2023-07-21 00:14:27,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:27,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:27,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:27,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:27,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:27,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:27,077 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:27,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:27,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:27,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:27,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:27,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 611 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899667094, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:27,095 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:27,097 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:27,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,099 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:27,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:27,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,126 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=502 (was 498) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=760 (was 760), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=537 (was 537), ProcessCount=177 (was 177), AvailableMemoryMB=2820 (was 2838) 2023-07-21 00:14:27,126 WARN [Listener at localhost/41495] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 00:14:27,150 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=502, OpenFileDescriptor=760, MaxFileDescriptor=60000, SystemLoadAverage=537, ProcessCount=177, AvailableMemoryMB=2816 2023-07-21 00:14:27,150 WARN [Listener at localhost/41495] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 00:14:27,150 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-21 00:14:27,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:27,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:27,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:27,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:27,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:27,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:27,174 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:27,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:27,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:27,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:27,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:27,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 639 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899667189, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:27,190 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:27,192 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:27,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,193 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:27,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:27,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:27,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-21 00:14:27,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:27,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:27,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:27,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup oldgroup 2023-07-21 00:14:27,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:27,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:27,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:27,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to default 2023-07-21 00:14:27,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-21 00:14:27,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:27,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:27,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:27,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 00:14:27,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:27,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:27,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=105, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-21 00:14:27,238 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:27,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 105 2023-07-21 00:14:27,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-21 00:14:27,240 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:27,241 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,241 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,241 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:27,244 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:27,246 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,246 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/testRename/914656f487f4e9f20596fe69207baae9 empty. 
2023-07-21 00:14:27,247 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,247 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-21 00:14:27,263 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:27,265 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 914656f487f4e9f20596fe69207baae9, NAME => 'testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 914656f487f4e9f20596fe69207baae9, disabling compactions & flushes 2023-07-21 00:14:27,278 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. after waiting 0 ms 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,278 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,278 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:27,281 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:27,282 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898467282"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898467282"}]},"ts":"1689898467282"} 2023-07-21 00:14:27,284 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 00:14:27,285 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:27,286 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898467285"}]},"ts":"1689898467285"} 2023-07-21 00:14:27,287 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-21 00:14:27,292 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:27,292 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:27,292 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:27,292 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:27,293 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, ASSIGN}] 2023-07-21 00:14:27,296 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, ASSIGN 2023-07-21 00:14:27,297 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:27,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-21 00:14:27,447 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 00:14:27,449 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:27,449 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898467448"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898467448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898467448"}]},"ts":"1689898467448"} 2023-07-21 00:14:27,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-21 00:14:27,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 914656f487f4e9f20596fe69207baae9, NAME => 'testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:27,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:27,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,609 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,611 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:27,611 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:27,612 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 914656f487f4e9f20596fe69207baae9 columnFamilyName tr 2023-07-21 00:14:27,613 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(310): Store=914656f487f4e9f20596fe69207baae9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:27,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:27,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:27,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 914656f487f4e9f20596fe69207baae9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10640089280, jitterRate=-0.009064465761184692}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:27,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:27,621 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9., pid=107, masterSystemTime=1689898467602 2023-07-21 00:14:27,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:27,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
2023-07-21 00:14:27,623 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:27,624 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898467623"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898467623"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898467623"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898467623"}]},"ts":"1689898467623"} 2023-07-21 00:14:27,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-21 00:14:27,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241 in 175 msec 2023-07-21 00:14:27,630 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-21 00:14:27,630 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, ASSIGN in 336 msec 2023-07-21 00:14:27,631 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:27,631 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898467631"}]},"ts":"1689898467631"} 2023-07-21 00:14:27,638 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-21 00:14:27,640 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:27,642 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, state=SUCCESS; CreateTableProcedure table=testRename in 406 msec 2023-07-21 00:14:27,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-21 00:14:27,843 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 105 completed 2023-07-21 00:14:27,843 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-21 00:14:27,843 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:27,847 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-21 00:14:27,847 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:27,847 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
2023-07-21 00:14:27,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-21 00:14:27,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:27,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:27,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:27,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:27,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-21 00:14:27,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 914656f487f4e9f20596fe69207baae9 to RSGroup oldgroup 2023-07-21 00:14:27,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:27,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:27,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:27,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:27,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:27,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE 2023-07-21 00:14:27,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-21 00:14:27,858 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE 2023-07-21 00:14:27,859 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:27,859 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898467859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898467859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898467859"}]},"ts":"1689898467859"} 2023-07-21 00:14:27,860 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, 
ppid=108, state=RUNNABLE; CloseRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:28,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 914656f487f4e9f20596fe69207baae9, disabling compactions & flushes 2023-07-21 00:14:28,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. after waiting 0 ms 2023-07-21 00:14:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:28,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:28,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 914656f487f4e9f20596fe69207baae9 move to jenkins-hbase4.apache.org,42163,1689898450682 record at close sequenceid=2 2023-07-21 00:14:28,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,022 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=CLOSED 2023-07-21 00:14:28,022 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898468022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898468022"}]},"ts":"1689898468022"} 2023-07-21 00:14:28,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-21 00:14:28,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241 in 164 msec 2023-07-21 00:14:28,026 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42163,1689898450682; 
forceNewPlan=false, retain=false 2023-07-21 00:14:28,176 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 00:14:28,177 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:28,177 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898468176"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898468176"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898468176"}]},"ts":"1689898468176"} 2023-07-21 00:14:28,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=108, state=RUNNABLE; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:28,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 914656f487f4e9f20596fe69207baae9, NAME => 'testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:28,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:28,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,337 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,338 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:28,339 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:28,339 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 914656f487f4e9f20596fe69207baae9 columnFamilyName tr 2023-07-21 00:14:28,340 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(310): Store=914656f487f4e9f20596fe69207baae9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:28,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:28,348 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 914656f487f4e9f20596fe69207baae9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11339755680, jitterRate=0.05609704554080963}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:28,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:28,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9., pid=110, masterSystemTime=1689898468330 2023-07-21 00:14:28,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:28,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
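[editor's note] The entries from the MoveTables request down to this point show the RSGroupAdminEndpoint moving testRename into rsgroup oldgroup: the group mapping is rewritten under the /hbase/rsgroup znodes, then TransitRegionStateProcedure pid=108 closes region 914656f487f4e9f20596fe69207baae9 on jenkins-hbase4.apache.org,43987 and reopens it on jenkins-hbase4.apache.org,42163. A client-side sketch of issuing that move is below; RSGroupAdminClient and its moveTables signature are recalled from the branch-2 hbase-rsgroup module and should be treated as assumptions, not as verified against this exact build.

// Sketch only: move a table into an existing RSGroup, roughly matching the
// RSGroupAdminService.MoveTables request recorded in the log.
// RSGroupAdminClient and its method names are assumptions based on the
// branch-2 hbase-rsgroup module; verify against the version in use.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Equivalent of "move tables [testRename] to rsgroup oldgroup":
      // the table's regions are closed on their current servers and
      // reopened on servers belonging to the target group.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    }
  }
}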
2023-07-21 00:14:28,357 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:28,358 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898468357"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898468357"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898468357"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898468357"}]},"ts":"1689898468357"} 2023-07-21 00:14:28,361 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=108 2023-07-21 00:14:28,361 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=108, state=SUCCESS; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,42163,1689898450682 in 181 msec 2023-07-21 00:14:28,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE in 506 msec 2023-07-21 00:14:28,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=108 2023-07-21 00:14:28,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-21 00:14:28,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:28,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:28,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:28,864 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:28,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 00:14:28,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:28,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-21 00:14:28,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:28,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 00:14:28,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:28,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:28,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:28,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-21 00:14:28,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:28,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:28,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:28,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:28,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:28,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:28,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:28,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:28,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987] to rsgroup normal 2023-07-21 00:14:28,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:28,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:28,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:28,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:28,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:28,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:28,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43987,1689898455241] are moved back to default 2023-07-21 00:14:28,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-21 00:14:28,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:28,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:28,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:28,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 00:14:28,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:28,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:28,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-21 00:14:28,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:28,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 111 2023-07-21 00:14:28,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 00:14:28,941 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:28,942 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:28,942 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:28,943 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-21 00:14:28,943 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:28,951 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:28,953 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:28,953 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 empty. 2023-07-21 00:14:28,954 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:28,954 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-21 00:14:28,975 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:28,977 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 458c75b1350c6b08d5d38ff954236ee0, NAME => 'unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:28,993 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:28,993 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 458c75b1350c6b08d5d38ff954236ee0, disabling compactions & flushes 2023-07-21 00:14:28,993 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:28,993 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:28,993 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. after waiting 0 ms 2023-07-21 00:14:28,993 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:28,993 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 
2023-07-21 00:14:28,994 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:28,996 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:28,997 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898468997"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898468997"}]},"ts":"1689898468997"} 2023-07-21 00:14:28,999 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:28,999 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:29,000 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898468999"}]},"ts":"1689898468999"} 2023-07-21 00:14:29,001 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-21 00:14:29,004 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, ASSIGN}] 2023-07-21 00:14:29,006 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, ASSIGN 2023-07-21 00:14:29,007 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:29,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 00:14:29,159 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:29,159 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898469158"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898469158"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898469158"}]},"ts":"1689898469158"} 2023-07-21 00:14:29,161 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:29,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=111 2023-07-21 00:14:29,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 458c75b1350c6b08d5d38ff954236ee0, NAME => 'unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:29,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:29,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,319 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,321 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:29,321 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:29,321 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 458c75b1350c6b08d5d38ff954236ee0 columnFamilyName ut 2023-07-21 00:14:29,322 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(310): Store=458c75b1350c6b08d5d38ff954236ee0/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:29,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:29,329 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 458c75b1350c6b08d5d38ff954236ee0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9793044320, jitterRate=-0.0879516750574112}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:29,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:29,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0., pid=113, masterSystemTime=1689898469313 2023-07-21 00:14:29,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 
2023-07-21 00:14:29,333 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:29,333 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898469333"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898469333"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898469333"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898469333"}]},"ts":"1689898469333"} 2023-07-21 00:14:29,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-21 00:14:29,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098 in 173 msec 2023-07-21 00:14:29,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-21 00:14:29,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, ASSIGN in 332 msec 2023-07-21 00:14:29,339 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:29,339 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898469339"}]},"ts":"1689898469339"} 2023-07-21 00:14:29,340 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-21 00:14:29,342 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:29,343 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=unmovedTable in 421 msec 2023-07-21 00:14:29,544 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 00:14:29,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-21 00:14:29,544 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 111 completed 2023-07-21 00:14:29,544 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-21 00:14:29,545 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:29,549 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
2023-07-21 00:14:29,549 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:29,549 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-21 00:14:29,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-21 00:14:29,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-21 00:14:29,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:29,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:29,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:29,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:29,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-21 00:14:29,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 458c75b1350c6b08d5d38ff954236ee0 to RSGroup normal 2023-07-21 00:14:29,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE 2023-07-21 00:14:29,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-21 00:14:29,571 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE 2023-07-21 00:14:29,572 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:29,572 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898469572"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898469572"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898469572"}]},"ts":"1689898469572"} 2023-07-21 00:14:29,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:29,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
458c75b1350c6b08d5d38ff954236ee0, disabling compactions & flushes 2023-07-21 00:14:29,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. after waiting 0 ms 2023-07-21 00:14:29,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:29,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:29,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:29,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 458c75b1350c6b08d5d38ff954236ee0 move to jenkins-hbase4.apache.org,43987,1689898455241 record at close sequenceid=2 2023-07-21 00:14:29,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:29,738 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=CLOSED 2023-07-21 00:14:29,738 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898469738"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898469738"}]},"ts":"1689898469738"} 2023-07-21 00:14:29,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-21 00:14:29,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098 in 165 msec 2023-07-21 00:14:29,742 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:29,892 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:29,893 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898469892"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898469892"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898469892"}]},"ts":"1689898469892"} 2023-07-21 00:14:29,894 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:30,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 458c75b1350c6b08d5d38ff954236ee0, NAME => 'unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:30,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:30,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,051 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,052 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:30,052 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:30,053 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
458c75b1350c6b08d5d38ff954236ee0 columnFamilyName ut 2023-07-21 00:14:30,053 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(310): Store=458c75b1350c6b08d5d38ff954236ee0/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:30,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,059 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 458c75b1350c6b08d5d38ff954236ee0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11097532160, jitterRate=0.033538222312927246}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:30,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:30,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0., pid=116, masterSystemTime=1689898470046 2023-07-21 00:14:30,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 
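[editor's note] Alongside the two table moves, the log records AddRSGroup for a group named normal, a MoveServers call that shifts jenkins-hbase4.apache.org:43987 out of default into it, and, in the entries that follow this point, a RenameRSGroup that turns oldgroup into newgroup. A sketch of the corresponding client calls is below, with the same caveat as above: the RSGroupAdminClient method names (addRSGroup, moveServers, renameRSGroup) are assumptions recalled from the hbase-rsgroup module, not facts taken from this log.

// Sketch only: group-administration steps matching the AddRSGroup, MoveServers
// and RenameRSGroup requests in this log. Method names on RSGroupAdminClient
// are assumptions; check them against the hbase-rsgroup module in use.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class GroupAdminSteps {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("normal");                        // AddRSGroup
      rsGroupAdmin.moveServers(                                 // MoveServers
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43987)),
          "normal");
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");       // RenameRSGroup
    }
  }
}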
2023-07-21 00:14:30,061 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:30,062 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898470061"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898470061"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898470061"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898470061"}]},"ts":"1689898470061"} 2023-07-21 00:14:30,064 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-21 00:14:30,064 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,43987,1689898455241 in 169 msec 2023-07-21 00:14:30,065 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE in 494 msec 2023-07-21 00:14:30,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-21 00:14:30,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-21 00:14:30,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:30,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:30,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:30,579 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:30,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 00:14:30,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:30,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-21 00:14:30,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:30,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 00:14:30,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:30,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-21 00:14:30,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:30,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:30,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:30,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:30,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-21 00:14:30,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-21 00:14:30,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:30,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:30,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-21 00:14:30,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:30,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-21 00:14:30,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:30,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-21 00:14:30,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:30,605 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:30,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:30,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-21 00:14:30,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:30,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:30,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:30,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:30,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:30,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-21 00:14:30,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 458c75b1350c6b08d5d38ff954236ee0 to RSGroup default 2023-07-21 00:14:30,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE 2023-07-21 00:14:30,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 00:14:30,614 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE 2023-07-21 00:14:30,623 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:30,623 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898470623"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898470623"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898470623"}]},"ts":"1689898470623"} 2023-07-21 00:14:30,625 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:30,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 458c75b1350c6b08d5d38ff954236ee0, disabling compactions & flushes 2023-07-21 00:14:30,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. after waiting 0 ms 2023-07-21 00:14:30,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:30,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:30,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:30,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 458c75b1350c6b08d5d38ff954236ee0 move to jenkins-hbase4.apache.org,46101,1689898451098 record at close sequenceid=5 2023-07-21 00:14:30,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:30,786 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=CLOSED 2023-07-21 00:14:30,786 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898470786"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898470786"}]},"ts":"1689898470786"} 2023-07-21 00:14:30,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-21 00:14:30,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,43987,1689898455241 in 163 msec 2023-07-21 00:14:30,790 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:30,941 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:30,941 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898470941"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898470941"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898470941"}]},"ts":"1689898470941"} 2023-07-21 00:14:30,943 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:31,101 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:31,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 458c75b1350c6b08d5d38ff954236ee0, NAME => 'unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:31,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:31,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,107 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,108 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:31,108 DEBUG [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/ut 2023-07-21 00:14:31,109 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 458c75b1350c6b08d5d38ff954236ee0 columnFamilyName ut 2023-07-21 00:14:31,109 INFO [StoreOpener-458c75b1350c6b08d5d38ff954236ee0-1] regionserver.HStore(310): Store=458c75b1350c6b08d5d38ff954236ee0/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:31,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:31,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 458c75b1350c6b08d5d38ff954236ee0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11991630720, jitterRate=0.11680763959884644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:31,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:31,117 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0., pid=119, masterSystemTime=1689898471095 2023-07-21 00:14:31,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:31,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 
2023-07-21 00:14:31,120 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=458c75b1350c6b08d5d38ff954236ee0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:31,120 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689898471119"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898471119"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898471119"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898471119"}]},"ts":"1689898471119"} 2023-07-21 00:14:31,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-21 00:14:31,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 458c75b1350c6b08d5d38ff954236ee0, server=jenkins-hbase4.apache.org,46101,1689898451098 in 178 msec 2023-07-21 00:14:31,124 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=458c75b1350c6b08d5d38ff954236ee0, REOPEN/MOVE in 510 msec 2023-07-21 00:14:31,360 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-21 00:14:31,361 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-21 00:14:31,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-21 00:14:31,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
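
[Editor's note] The MoveTables / TransitRegionStateProcedure sequence above (pid=117) is driven by a single rsgroup admin call from the test client. Below is a minimal, hypothetical Java sketch of that call, assuming the hbase-rsgroup client class RSGroupAdminClient that appears in the stack traces later in this log; the moveTables/getRSGroupInfoOfTable signatures are assumptions, not taken from this log.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableBackToDefault {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Issues the MoveTables RPC seen above; the master then runs a
      // TransitRegionStateProcedure (REOPEN/MOVE) for each region of the table.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")),
          RSGroupInfo.DEFAULT_GROUP);
      // Confirm the new placement, mirroring the GetRSGroupInfoOfTable request.
      System.out.println(
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable")));
    }
  }
}
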
2023-07-21 00:14:31,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:31,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43987] to rsgroup default 2023-07-21 00:14:31,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-21 00:14:31,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:31,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:31,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:31,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:31,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-21 00:14:31,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43987,1689898455241] are moved back to normal 2023-07-21 00:14:31,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-21 00:14:31,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:31,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-21 00:14:31,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:31,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:31,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:31,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 00:14:31,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:31,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:31,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:31,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:31,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:31,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:31,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:31,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:31,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:31,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:31,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:31,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-21 00:14:31,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:31,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:31,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:31,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-21 00:14:31,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(345): Moving region 914656f487f4e9f20596fe69207baae9 to RSGroup default 2023-07-21 00:14:31,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE 2023-07-21 00:14:31,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-21 00:14:31,662 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE 2023-07-21 00:14:31,663 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:31,663 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898471663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898471663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898471663"}]},"ts":"1689898471663"} 2023-07-21 00:14:31,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,42163,1689898450682}] 2023-07-21 00:14:31,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:31,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 914656f487f4e9f20596fe69207baae9, disabling compactions & flushes 2023-07-21 00:14:31,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:31,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:31,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. after waiting 0 ms 2023-07-21 00:14:31,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:31,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-21 00:14:31,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
2023-07-21 00:14:31,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:31,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 914656f487f4e9f20596fe69207baae9 move to jenkins-hbase4.apache.org,43987,1689898455241 record at close sequenceid=5 2023-07-21 00:14:31,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:31,828 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=CLOSED 2023-07-21 00:14:31,828 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898471828"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898471828"}]},"ts":"1689898471828"} 2023-07-21 00:14:31,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-21 00:14:31,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,42163,1689898450682 in 164 msec 2023-07-21 00:14:31,831 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:31,982 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 00:14:31,982 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:31,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898471982"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898471982"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898471982"}]},"ts":"1689898471982"} 2023-07-21 00:14:31,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:32,141 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
2023-07-21 00:14:32,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 914656f487f4e9f20596fe69207baae9, NAME => 'testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:32,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,149 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,150 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:32,150 DEBUG [StoreOpener-914656f487f4e9f20596fe69207baae9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/tr 2023-07-21 00:14:32,151 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 914656f487f4e9f20596fe69207baae9 columnFamilyName tr 2023-07-21 00:14:32,152 INFO [StoreOpener-914656f487f4e9f20596fe69207baae9-1] regionserver.HStore(310): Store=914656f487f4e9f20596fe69207baae9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:32,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:32,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 914656f487f4e9f20596fe69207baae9; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11403996160, jitterRate=0.06207990646362305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:32,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:32,160 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9., pid=122, masterSystemTime=1689898472136 2023-07-21 00:14:32,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:32,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:32,162 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=914656f487f4e9f20596fe69207baae9, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:32,163 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689898472162"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898472162"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898472162"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898472162"}]},"ts":"1689898472162"} 2023-07-21 00:14:32,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-21 00:14:32,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 914656f487f4e9f20596fe69207baae9, server=jenkins-hbase4.apache.org,43987,1689898455241 in 180 msec 2023-07-21 00:14:32,173 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=914656f487f4e9f20596fe69207baae9, REOPEN/MOVE in 505 msec 2023-07-21 00:14:32,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-21 00:14:32,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
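
[Editor's note] The MoveServers and RemoveRSGroup requests logged just below drain the remaining servers out of newgroup and then delete the now-empty group. A minimal, hypothetical sketch of those two calls follows; the Address/RSGroupAdminClient usage is an assumption based on the class names in the stack traces, not this test's actual code.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RemoveGroupAfterTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // MoveServers: send the group's region servers back to "default" first.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42163)),
          RSGroupInfo.DEFAULT_GROUP);
      // RemoveRSGroup: delete the empty group; the manager rewrites the group
      // znodes under /hbase/rsgroup, lowering the "ZK GroupInfo count" above.
      rsGroupAdmin.removeRSGroup("newgroup");
    }
  }
}
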
2023-07-21 00:14:32,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:32,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:32,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-21 00:14:32,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:32,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-21 00:14:32,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to newgroup 2023-07-21 00:14:32,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-21 00:14:32,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:32,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-21 00:14:32,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:32,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:32,678 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:32,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:32,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:32,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:32,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 759 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899672693, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:32,693 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:32,695 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:32,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,696 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:32,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:32,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,714 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=496 (was 502), OpenFileDescriptor=736 (was 760), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 537) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 177), AvailableMemoryMB=2693 (was 2816) 2023-07-21 00:14:32,731 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=496, OpenFileDescriptor=736, MaxFileDescriptor=60000, SystemLoadAverage=574, ProcessCount=174, AvailableMemoryMB=2693 2023-07-21 00:14:32,731 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-21 00:14:32,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:32,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
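
[Editor's note] The "Waiting for cleanup to finish" entries above come from the Waiter polling utility used by TestRSGroupsBase between test methods. A minimal, hypothetical sketch of that polling idiom follows; the predicate shown here is an assumption (the real one inspects the listed group contents, not just the count).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class WaitForRSGroupCleanup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Poll ListRSGroupInfos (up to 60s) until only the "default" and "master"
      // groups remain, as in the "Waiting for cleanup to finish" entries above.
      Waiter.waitFor(conf, 60_000, (Waiter.Predicate<Exception>) () ->
          rsGroupAdmin.listRSGroups().size() == 2);
    }
  }
}
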
2023-07-21 00:14:32,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:32,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:32,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:32,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:32,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:32,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:32,747 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:32,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:32,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:32,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:32,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:32,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 787 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899672757, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:32,758 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:32,759 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:32,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,760 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-21 00:14:32,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:32,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-21 00:14:32,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-21 00:14:32,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-21 00:14:32,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-21 00:14:32,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 799 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:44654 deadline: 1689899672769, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-21 00:14:32,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-21 00:14:32,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 802 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:44654 deadline: 1689899672771, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 00:14:32,774 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-21 00:14:32,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-21 00:14:32,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-21 00:14:32,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:44654 deadline: 1689899672779, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-21 00:14:32,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:32,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:32,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:32,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:32,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:32,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:32,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:32,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:32,794 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:32,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:32,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:32,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:32,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:32,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 830 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899672804, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:32,808 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:32,810 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:32,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,811 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:32,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:32,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,831 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=500 (was 496) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x27b82929-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=736 (was 736), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 574), ProcessCount=174 (was 174), AvailableMemoryMB=2691 (was 2693) 2023-07-21 00:14:32,851 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=500, OpenFileDescriptor=736, MaxFileDescriptor=60000, SystemLoadAverage=574, ProcessCount=174, AvailableMemoryMB=2690 2023-07-21 00:14:32,851 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-21 00:14:32,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:32,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:32,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:32,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:32,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:32,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:32,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:32,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:32,868 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:32,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:32,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:32,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:32,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:32,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:32,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 858 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899672882, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:32,883 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:32,884 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:32,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,886 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:32,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:32,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:32,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 
00:14:32,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:32,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:32,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:32,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-21 00:14:32,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to default 2023-07-21 00:14:32,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:32,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:32,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:32,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,915 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:32,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:32,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:32,919 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:32,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 123 2023-07-21 00:14:32,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 00:14:32,921 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:32,922 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 00:14:32,922 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:32,922 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:32,924 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:32,928 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:32,928 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:32,928 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:32,928 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:32,928 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 empty. 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 empty. 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a empty. 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 empty. 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 empty. 2023-07-21 00:14:32,929 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:32,930 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:32,930 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:32,930 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:32,930 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:32,930 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 00:14:32,948 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:32,949 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 392f1a4416425cf4e0efeb23580e18e8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:32,950 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => ac3c447d15e4f378854e9e3eed38bb2a, NAME => 'Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:32,950 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => ec0c72079c89634c290dd8677a352133, NAME => 'Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing ac3c447d15e4f378854e9e3eed38bb2a, disabling compactions & flushes 2023-07-21 00:14:32,977 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. after waiting 0 ms 2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:32,977 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 
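The AddRSGroup, MoveServers and GetRSGroupInfo requests logged above (group Group_testDisabledTableMove_557465426, servers jenkins-hbase4.apache.org:42163 and jenkins-hbase4.apache.org:33545) correspond to the hbase-rsgroup client API. A minimal sketch of the equivalent client-side calls, assuming an already-open Connection to the cluster (the connection handling is not shown in the log and is an assumption here); this is illustrative, not the test's literal code:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupSetupSketch {
      // Sketch only: issues the same admin RPCs that appear in the log above.
      static RSGroupInfo setUpGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

        // -> RSGroupAdminService.AddRSGroup
        rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_557465426");

        // -> RSGroupAdminService.MoveServers (the two servers named in the log)
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 42163));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33545));
        rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_557465426");

        // -> RSGroupAdminService.GetRSGroupInfo
        return rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_557465426");
      }
    }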
2023-07-21 00:14:32,977 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for ac3c447d15e4f378854e9e3eed38bb2a: 2023-07-21 00:14:32,978 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9df457790082a187198f42995536c1f5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing ec0c72079c89634c290dd8677a352133, disabling compactions & flushes 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 392f1a4416425cf4e0efeb23580e18e8, disabling compactions & flushes 2023-07-21 00:14:32,978 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:32,978 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. after waiting 0 ms 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 
after waiting 0 ms 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:32,978 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:32,978 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:32,979 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:32,979 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for ec0c72079c89634c290dd8677a352133: 2023-07-21 00:14:32,979 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 392f1a4416425cf4e0efeb23580e18e8: 2023-07-21 00:14:32,979 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => bcf4af3afc0216f493476f340ee7a0c7, NAME => 'Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 9df457790082a187198f42995536c1f5, disabling compactions & flushes 2023-07-21 00:14:32,998 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. after waiting 0 ms 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 
2023-07-21 00:14:32,998 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 9df457790082a187198f42995536c1f5: 2023-07-21 00:14:32,998 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing bcf4af3afc0216f493476f340ee7a0c7, disabling compactions & flushes 2023-07-21 00:14:32,999 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:32,999 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:32,999 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. after waiting 0 ms 2023-07-21 00:14:32,999 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:32,999 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 
2023-07-21 00:14:32,999 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for bcf4af3afc0216f493476f340ee7a0c7: 2023-07-21 00:14:33,001 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:33,003 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473002"}]},"ts":"1689898473002"} 2023-07-21 00:14:33,003 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473002"}]},"ts":"1689898473002"} 2023-07-21 00:14:33,003 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473002"}]},"ts":"1689898473002"} 2023-07-21 00:14:33,003 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473002"}]},"ts":"1689898473002"} 2023-07-21 00:14:33,003 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473002"}]},"ts":"1689898473002"} 2023-07-21 00:14:33,005 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
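The create request logged for 'Group_testDisabledTableMove' (a single family 'f' with default attributes, five regions bounded by aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz) maps onto the standard Admin.createTable call with explicit split keys. A hedged sketch against the public 2.x client API; the wrapper class and the open Connection are assumptions, while the table name, family and split keys are taken from the log entries above:

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      // Creates a table equivalent to the one in the log: family 'f' with
      // default attributes, pre-split into five regions by four split keys.
      static void createTable(Connection conn) throws IOException {
        TableName name = TableName.valueOf("Group_testDisabledTableMove");
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Split keys copied from the region boundaries in the log; the binary
        // ones use the same \xNN escapes that appear there.
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splitKeys);
        }
      }
    }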
2023-07-21 00:14:33,006 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:33,006 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898473006"}]},"ts":"1689898473006"} 2023-07-21 00:14:33,007 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-21 00:14:33,011 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:33,012 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:33,012 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:33,012 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:33,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, ASSIGN}, {pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, ASSIGN}, {pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, ASSIGN}, {pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, ASSIGN}, {pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, ASSIGN}] 2023-07-21 00:14:33,014 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, ASSIGN 2023-07-21 00:14:33,014 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, ASSIGN 2023-07-21 00:14:33,014 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, ASSIGN 2023-07-21 00:14:33,015 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, ASSIGN 2023-07-21 00:14:33,015 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:33,015 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:33,015 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:33,015 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46101,1689898451098; forceNewPlan=false, retain=false 2023-07-21 00:14:33,016 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, ASSIGN 2023-07-21 00:14:33,016 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43987,1689898455241; forceNewPlan=false, retain=false 2023-07-21 00:14:33,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 00:14:33,165 INFO [jenkins-hbase4:33855] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
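At CREATE_TABLE_ASSIGN_REGIONS the table still belongs to the default rsgroup, so the balancer only considers the servers remaining in that group; that is why the five regions land on jenkins-hbase4.apache.org,43987 and jenkins-hbase4.apache.org,46101 rather than on the servers just moved to Group_testDisabledTableMove_557465426. The resulting placement, written to hbase:meta below, can be read back from a client with a RegionLocator; a small sketch under the same assumed open Connection:

    import java.io.IOException;

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      // Prints the server hosting each region, matching the regionLocation=...
      // values that the assignment procedures record in hbase:meta below.
      static void printLocations(Connection conn) throws IOException {
        TableName name = TableName.valueOf("Group_testDisabledTableMove");
        try (RegionLocator locator = conn.getRegionLocator(name)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }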
2023-07-21 00:14:33,169 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=9df457790082a187198f42995536c1f5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,169 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=392f1a4416425cf4e0efeb23580e18e8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,169 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473169"}]},"ts":"1689898473169"} 2023-07-21 00:14:33,169 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=bcf4af3afc0216f493476f340ee7a0c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,169 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=ac3c447d15e4f378854e9e3eed38bb2a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,169 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=ec0c72079c89634c290dd8677a352133, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,170 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473169"}]},"ts":"1689898473169"} 2023-07-21 00:14:33,170 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473169"}]},"ts":"1689898473169"} 2023-07-21 00:14:33,170 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473169"}]},"ts":"1689898473169"} 2023-07-21 00:14:33,169 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473169"}]},"ts":"1689898473169"} 2023-07-21 00:14:33,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=127, state=RUNNABLE; OpenRegionProcedure 9df457790082a187198f42995536c1f5, 
server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:33,172 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=125, state=RUNNABLE; OpenRegionProcedure ac3c447d15e4f378854e9e3eed38bb2a, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,172 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=124, state=RUNNABLE; OpenRegionProcedure ec0c72079c89634c290dd8677a352133, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:33,173 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=128, state=RUNNABLE; OpenRegionProcedure bcf4af3afc0216f493476f340ee7a0c7, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=126, state=RUNNABLE; OpenRegionProcedure 392f1a4416425cf4e0efeb23580e18e8, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 00:14:33,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:33,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec0c72079c89634c290dd8677a352133, NAME => 'Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-21 00:14:33,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:33,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,333 INFO [StoreOpener-ec0c72079c89634c290dd8677a352133-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 
2023-07-21 00:14:33,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 392f1a4416425cf4e0efeb23580e18e8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-21 00:14:33,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:33,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,334 DEBUG [StoreOpener-ec0c72079c89634c290dd8677a352133-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/f 2023-07-21 00:14:33,334 DEBUG [StoreOpener-ec0c72079c89634c290dd8677a352133-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/f 2023-07-21 00:14:33,335 INFO [StoreOpener-ec0c72079c89634c290dd8677a352133-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec0c72079c89634c290dd8677a352133 columnFamilyName f 2023-07-21 00:14:33,335 INFO [StoreOpener-ec0c72079c89634c290dd8677a352133-1] regionserver.HStore(310): Store=ec0c72079c89634c290dd8677a352133/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:33,338 INFO [StoreOpener-392f1a4416425cf4e0efeb23580e18e8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 
2023-07-21 00:14:33,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,340 DEBUG [StoreOpener-392f1a4416425cf4e0efeb23580e18e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/f 2023-07-21 00:14:33,340 DEBUG [StoreOpener-392f1a4416425cf4e0efeb23580e18e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/f 2023-07-21 00:14:33,340 INFO [StoreOpener-392f1a4416425cf4e0efeb23580e18e8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 392f1a4416425cf4e0efeb23580e18e8 columnFamilyName f 2023-07-21 00:14:33,341 INFO [StoreOpener-392f1a4416425cf4e0efeb23580e18e8-1] regionserver.HStore(310): Store=392f1a4416425cf4e0efeb23580e18e8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:33,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:33,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ec0c72079c89634c290dd8677a352133; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10193630880, 
jitterRate=-0.05064414441585541}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:33,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ec0c72079c89634c290dd8677a352133: 2023-07-21 00:14:33,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133., pid=131, masterSystemTime=1689898473322 2023-07-21 00:14:33,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:33,348 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 392f1a4416425cf4e0efeb23580e18e8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12036092480, jitterRate=0.12094846367835999}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:33,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:33,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 392f1a4416425cf4e0efeb23580e18e8: 2023-07-21 00:14:33,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:33,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 
2023-07-21 00:14:33,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9df457790082a187198f42995536c1f5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-21 00:14:33,349 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=ec0c72079c89634c290dd8677a352133, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,349 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473349"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898473349"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898473349"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898473349"}]},"ts":"1689898473349"} 2023-07-21 00:14:33,349 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8., pid=133, masterSystemTime=1689898473325 2023-07-21 00:14:33,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:33,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 
2023-07-21 00:14:33,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ac3c447d15e4f378854e9e3eed38bb2a, NAME => 'Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-21 00:14:33,351 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=392f1a4416425cf4e0efeb23580e18e8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,351 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473351"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898473351"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898473351"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898473351"}]},"ts":"1689898473351"} 2023-07-21 00:14:33,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:33,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,352 INFO [StoreOpener-9df457790082a187198f42995536c1f5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,354 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=124 2023-07-21 00:14:33,354 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=124, state=SUCCESS; OpenRegionProcedure ec0c72079c89634c290dd8677a352133, server=jenkins-hbase4.apache.org,46101,1689898451098 in 179 msec 2023-07-21 00:14:33,354 DEBUG [StoreOpener-9df457790082a187198f42995536c1f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/f 2023-07-21 00:14:33,354 DEBUG [StoreOpener-9df457790082a187198f42995536c1f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/f 2023-07-21 00:14:33,355 INFO [StoreOpener-ac3c447d15e4f378854e9e3eed38bb2a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,355 INFO [StoreOpener-9df457790082a187198f42995536c1f5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9df457790082a187198f42995536c1f5 columnFamilyName f 2023-07-21 00:14:33,356 INFO [StoreOpener-9df457790082a187198f42995536c1f5-1] regionserver.HStore(310): Store=9df457790082a187198f42995536c1f5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:33,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, ASSIGN in 342 msec 2023-07-21 00:14:33,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=126 2023-07-21 00:14:33,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=126, state=SUCCESS; OpenRegionProcedure 392f1a4416425cf4e0efeb23580e18e8, server=jenkins-hbase4.apache.org,43987,1689898455241 in 180 msec 2023-07-21 00:14:33,358 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, ASSIGN in 345 msec 2023-07-21 00:14:33,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,359 DEBUG [StoreOpener-ac3c447d15e4f378854e9e3eed38bb2a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/f 2023-07-21 00:14:33,359 DEBUG [StoreOpener-ac3c447d15e4f378854e9e3eed38bb2a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/f 2023-07-21 00:14:33,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,359 INFO [StoreOpener-ac3c447d15e4f378854e9e3eed38bb2a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ac3c447d15e4f378854e9e3eed38bb2a columnFamilyName f 2023-07-21 00:14:33,360 INFO [StoreOpener-ac3c447d15e4f378854e9e3eed38bb2a-1] regionserver.HStore(310): Store=ac3c447d15e4f378854e9e3eed38bb2a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:33,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:33,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9df457790082a187198f42995536c1f5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10412931520, jitterRate=-0.03022018074989319}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:33,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9df457790082a187198f42995536c1f5: 2023-07-21 00:14:33,367 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5., pid=129, masterSystemTime=1689898473322 2023-07-21 00:14:33,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:33,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ac3c447d15e4f378854e9e3eed38bb2a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10525120800, jitterRate=-0.019771739840507507}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ac3c447d15e4f378854e9e3eed38bb2a: 2023-07-21 00:14:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:33,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a., pid=130, masterSystemTime=1689898473325 2023-07-21 00:14:33,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:33,370 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=9df457790082a187198f42995536c1f5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,370 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473370"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898473370"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898473370"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898473370"}]},"ts":"1689898473370"} 2023-07-21 00:14:33,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:33,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:33,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 
2023-07-21 00:14:33,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcf4af3afc0216f493476f340ee7a0c7, NAME => 'Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-21 00:14:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:33,372 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=ac3c447d15e4f378854e9e3eed38bb2a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,372 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473372"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898473372"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898473372"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898473372"}]},"ts":"1689898473372"} 2023-07-21 00:14:33,373 INFO [StoreOpener-bcf4af3afc0216f493476f340ee7a0c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,374 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=127 2023-07-21 00:14:33,374 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=127, state=SUCCESS; OpenRegionProcedure 9df457790082a187198f42995536c1f5, server=jenkins-hbase4.apache.org,46101,1689898451098 in 201 msec 2023-07-21 00:14:33,375 DEBUG [StoreOpener-bcf4af3afc0216f493476f340ee7a0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/f 2023-07-21 00:14:33,375 DEBUG [StoreOpener-bcf4af3afc0216f493476f340ee7a0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/f 2023-07-21 00:14:33,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=125 2023-07-21 00:14:33,375 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=123, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, ASSIGN in 362 msec 2023-07-21 00:14:33,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=125, state=SUCCESS; OpenRegionProcedure ac3c447d15e4f378854e9e3eed38bb2a, server=jenkins-hbase4.apache.org,43987,1689898455241 in 203 msec 2023-07-21 00:14:33,376 INFO [StoreOpener-bcf4af3afc0216f493476f340ee7a0c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcf4af3afc0216f493476f340ee7a0c7 columnFamilyName f 2023-07-21 00:14:33,376 INFO [StoreOpener-bcf4af3afc0216f493476f340ee7a0c7-1] regionserver.HStore(310): Store=bcf4af3afc0216f493476f340ee7a0c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:33,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, ASSIGN in 364 msec 2023-07-21 00:14:33,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,381 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:33,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcf4af3afc0216f493476f340ee7a0c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11441192800, jitterRate=0.06554411351680756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:33,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcf4af3afc0216f493476f340ee7a0c7: 2023-07-21 00:14:33,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7., 
pid=132, masterSystemTime=1689898473325 2023-07-21 00:14:33,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,386 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=bcf4af3afc0216f493476f340ee7a0c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,386 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473386"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898473386"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898473386"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898473386"}]},"ts":"1689898473386"} 2023-07-21 00:14:33,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=128 2023-07-21 00:14:33,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=128, state=SUCCESS; OpenRegionProcedure bcf4af3afc0216f493476f340ee7a0c7, server=jenkins-hbase4.apache.org,43987,1689898455241 in 215 msec 2023-07-21 00:14:33,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=123 2023-07-21 00:14:33,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, ASSIGN in 377 msec 2023-07-21 00:14:33,391 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:33,392 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898473391"}]},"ts":"1689898473391"} 2023-07-21 00:14:33,393 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-21 00:14:33,399 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:33,400 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 482 msec 2023-07-21 00:14:33,495 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-21 00:14:33,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-21 00:14:33,524 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, 
procId: 123 completed 2023-07-21 00:14:33,524 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-21 00:14:33,524 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:33,528 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-21 00:14:33,528 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:33,528 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-21 00:14:33,529 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:33,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 00:14:33,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:33,535 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 00:14:33,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 00:14:33,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=134, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-21 00:14:33,539 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898473539"}]},"ts":"1689898473539"} 2023-07-21 00:14:33,541 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-21 00:14:33,542 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-21 00:14:33,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, UNASSIGN}, {pid=136, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, UNASSIGN}, {pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, UNASSIGN}, {pid=138, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, UNASSIGN}, {pid=139, ppid=134, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, UNASSIGN}] 2023-07-21 00:14:33,545 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, UNASSIGN 2023-07-21 00:14:33,545 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, UNASSIGN 2023-07-21 00:14:33,545 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ec0c72079c89634c290dd8677a352133, UNASSIGN 2023-07-21 00:14:33,546 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, UNASSIGN 2023-07-21 00:14:33,546 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, UNASSIGN 2023-07-21 00:14:33,547 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=bcf4af3afc0216f493476f340ee7a0c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,547 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=ec0c72079c89634c290dd8677a352133, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,547 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=ac3c447d15e4f378854e9e3eed38bb2a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,547 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473547"}]},"ts":"1689898473547"} 2023-07-21 00:14:33,547 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473547"}]},"ts":"1689898473547"} 2023-07-21 00:14:33,547 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473547"}]},"ts":"1689898473547"} 2023-07-21 00:14:33,547 
INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=392f1a4416425cf4e0efeb23580e18e8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:33,547 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=9df457790082a187198f42995536c1f5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:33,547 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473547"}]},"ts":"1689898473547"} 2023-07-21 00:14:33,547 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898473547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898473547"}]},"ts":"1689898473547"} 2023-07-21 00:14:33,548 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; CloseRegionProcedure ec0c72079c89634c290dd8677a352133, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:33,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=139, state=RUNNABLE; CloseRegionProcedure bcf4af3afc0216f493476f340ee7a0c7, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,550 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=136, state=RUNNABLE; CloseRegionProcedure ac3c447d15e4f378854e9e3eed38bb2a, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,550 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=137, state=RUNNABLE; CloseRegionProcedure 392f1a4416425cf4e0efeb23580e18e8, server=jenkins-hbase4.apache.org,43987,1689898455241}] 2023-07-21 00:14:33,551 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=138, state=RUNNABLE; CloseRegionProcedure 9df457790082a187198f42995536c1f5, server=jenkins-hbase4.apache.org,46101,1689898451098}] 2023-07-21 00:14:33,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-21 00:14:33,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9df457790082a187198f42995536c1f5, disabling compactions & flushes 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bcf4af3afc0216f493476f340ee7a0c7, disabling compactions & flushes 2023-07-21 00:14:33,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:33,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. after waiting 0 ms 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. after waiting 0 ms 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 2023-07-21 00:14:33,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:33,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:33,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7. 2023-07-21 00:14:33,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcf4af3afc0216f493476f340ee7a0c7: 2023-07-21 00:14:33,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5. 
2023-07-21 00:14:33,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9df457790082a187198f42995536c1f5: 2023-07-21 00:14:33,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 392f1a4416425cf4e0efeb23580e18e8, disabling compactions & flushes 2023-07-21 00:14:33,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. after waiting 0 ms 2023-07-21 00:14:33,717 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,718 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=bcf4af3afc0216f493476f340ee7a0c7, regionState=CLOSED 2023-07-21 00:14:33,718 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473718"}]},"ts":"1689898473718"} 2023-07-21 00:14:33,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,719 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ec0c72079c89634c290dd8677a352133, disabling compactions & flushes 2023-07-21 00:14:33,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:33,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 2023-07-21 00:14:33,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. after waiting 0 ms 2023-07-21 00:14:33,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 
2023-07-21 00:14:33,720 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=9df457790082a187198f42995536c1f5, regionState=CLOSED 2023-07-21 00:14:33,720 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473720"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473720"}]},"ts":"1689898473720"} 2023-07-21 00:14:33,723 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=139 2023-07-21 00:14:33,723 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=139, state=SUCCESS; CloseRegionProcedure bcf4af3afc0216f493476f340ee7a0c7, server=jenkins-hbase4.apache.org,43987,1689898455241 in 172 msec 2023-07-21 00:14:33,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=138 2023-07-21 00:14:33,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=138, state=SUCCESS; CloseRegionProcedure 9df457790082a187198f42995536c1f5, server=jenkins-hbase4.apache.org,46101,1689898451098 in 170 msec 2023-07-21 00:14:33,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:33,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:33,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8. 2023-07-21 00:14:33,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 392f1a4416425cf4e0efeb23580e18e8: 2023-07-21 00:14:33,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133. 
2023-07-21 00:14:33,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ec0c72079c89634c290dd8677a352133: 2023-07-21 00:14:33,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bcf4af3afc0216f493476f340ee7a0c7, UNASSIGN in 180 msec 2023-07-21 00:14:33,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ac3c447d15e4f378854e9e3eed38bb2a, disabling compactions & flushes 2023-07-21 00:14:33,728 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9df457790082a187198f42995536c1f5, UNASSIGN in 180 msec 2023-07-21 00:14:33,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:33,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:33,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. after waiting 0 ms 2023-07-21 00:14:33,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 
2023-07-21 00:14:33,730 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=392f1a4416425cf4e0efeb23580e18e8, regionState=CLOSED 2023-07-21 00:14:33,730 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473730"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473730"}]},"ts":"1689898473730"} 2023-07-21 00:14:33,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,731 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=ec0c72079c89634c290dd8677a352133, regionState=CLOSED 2023-07-21 00:14:33,731 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689898473730"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473730"}]},"ts":"1689898473730"} 2023-07-21 00:14:33,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:33,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a. 2023-07-21 00:14:33,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ac3c447d15e4f378854e9e3eed38bb2a: 2023-07-21 00:14:33,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=137 2023-07-21 00:14:33,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-21 00:14:33,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; CloseRegionProcedure ec0c72079c89634c290dd8677a352133, server=jenkins-hbase4.apache.org,46101,1689898451098 in 185 msec 2023-07-21 00:14:33,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,735 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=137, state=SUCCESS; CloseRegionProcedure 392f1a4416425cf4e0efeb23580e18e8, server=jenkins-hbase4.apache.org,43987,1689898455241 in 183 msec 2023-07-21 00:14:33,736 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=ac3c447d15e4f378854e9e3eed38bb2a, regionState=CLOSED 2023-07-21 00:14:33,736 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689898473736"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898473736"}]},"ts":"1689898473736"} 2023-07-21 00:14:33,737 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, 
region=ec0c72079c89634c290dd8677a352133, UNASSIGN in 192 msec 2023-07-21 00:14:33,737 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=392f1a4416425cf4e0efeb23580e18e8, UNASSIGN in 192 msec 2023-07-21 00:14:33,739 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=136 2023-07-21 00:14:33,739 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=136, state=SUCCESS; CloseRegionProcedure ac3c447d15e4f378854e9e3eed38bb2a, server=jenkins-hbase4.apache.org,43987,1689898455241 in 188 msec 2023-07-21 00:14:33,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=134 2023-07-21 00:14:33,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ac3c447d15e4f378854e9e3eed38bb2a, UNASSIGN in 196 msec 2023-07-21 00:14:33,741 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898473741"}]},"ts":"1689898473741"} 2023-07-21 00:14:33,742 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-21 00:14:33,744 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-21 00:14:33,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 209 msec 2023-07-21 00:14:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-21 00:14:33,842 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 134 completed 2023-07-21 00:14:33,842 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:33,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:33,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:33,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-21 00:14:33,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] 
rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_557465426, current retry=0 2023-07-21 00:14:33,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_557465426. 2023-07-21 00:14:33,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:33,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:33,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:33,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-21 00:14:33,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:33,856 INFO [Listener at localhost/41495] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-21 00:14:33,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-21 00:14:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:33,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 918 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:44654 deadline: 1689898533856, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-21 00:14:33,857 DEBUG [Listener at localhost/41495] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-21 00:14:33,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-21 00:14:33,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,861 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=146, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_557465426' 2023-07-21 00:14:33,861 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=146, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:33,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:33,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:33,869 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,869 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,869 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,869 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,869 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-21 00:14:33,871 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/f, FileablePath, 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/recovered.edits] 2023-07-21 00:14:33,871 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/recovered.edits] 2023-07-21 00:14:33,871 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/recovered.edits] 2023-07-21 00:14:33,872 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/recovered.edits] 2023-07-21 00:14:33,872 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/f, FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/recovered.edits] 2023-07-21 00:14:33,880 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a/recovered.edits/4.seqid 2023-07-21 00:14:33,881 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8/recovered.edits/4.seqid 2023-07-21 00:14:33,881 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ac3c447d15e4f378854e9e3eed38bb2a 2023-07-21 00:14:33,881 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/recovered.edits/4.seqid to 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7/recovered.edits/4.seqid 2023-07-21 00:14:33,881 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133/recovered.edits/4.seqid 2023-07-21 00:14:33,881 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/recovered.edits/4.seqid to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/archive/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5/recovered.edits/4.seqid 2023-07-21 00:14:33,882 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/392f1a4416425cf4e0efeb23580e18e8 2023-07-21 00:14:33,882 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/bcf4af3afc0216f493476f340ee7a0c7 2023-07-21 00:14:33,882 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/ec0c72079c89634c290dd8677a352133 2023-07-21 00:14:33,882 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/.tmp/data/default/Group_testDisabledTableMove/9df457790082a187198f42995536c1f5 2023-07-21 00:14:33,882 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-21 00:14:33,888 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=146, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,890 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-21 00:14:33,895 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=146, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-21 00:14:33,897 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898473897"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898473897"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898473897"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689898472916.9df457790082a187198f42995536c1f5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898473897"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,897 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898473897"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,899 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-21 00:14:33,899 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ec0c72079c89634c290dd8677a352133, NAME => 'Group_testDisabledTableMove,,1689898472916.ec0c72079c89634c290dd8677a352133.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => ac3c447d15e4f378854e9e3eed38bb2a, NAME => 'Group_testDisabledTableMove,aaaaa,1689898472916.ac3c447d15e4f378854e9e3eed38bb2a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 392f1a4416425cf4e0efeb23580e18e8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689898472916.392f1a4416425cf4e0efeb23580e18e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 9df457790082a187198f42995536c1f5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689898472916.9df457790082a187198f42995536c1f5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => bcf4af3afc0216f493476f340ee7a0c7, NAME => 'Group_testDisabledTableMove,zzzzz,1689898472916.bcf4af3afc0216f493476f340ee7a0c7.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-21 00:14:33,900 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-21 00:14:33,900 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898473900"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:33,901 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-21 00:14:33,903 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=146, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-21 00:14:33,904 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 45 msec 2023-07-21 00:14:33,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-21 00:14:33,971 INFO [Listener at localhost/41495] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-21 00:14:33,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:33,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:33,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:33,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:33,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:33,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:33545] to rsgroup default 2023-07-21 00:14:33,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:33,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:33,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:33,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_557465426, current retry=0 2023-07-21 00:14:33,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33545,1689898450890, jenkins-hbase4.apache.org,42163,1689898450682] are moved back to Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_557465426 => default 2023-07-21 00:14:33,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:33,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_557465426 2023-07-21 00:14:33,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:33,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:33,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:33,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:33,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:33,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:33,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:33,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:33,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:33,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:33,998 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:33,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:34,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:34,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:34,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:34,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:34,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:34,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:34,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:34,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:34,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 952 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899674008, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:34,008 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:34,010 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:34,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:34,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:34,011 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:34,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:34,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:34,031 INFO [Listener at localhost/41495] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502 (was 500) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1997362576_17 at /127.0.0.1:60630 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a6bf6db-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_968305499_17 at /127.0.0.1:33402 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7db6cade-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=766 (was 736) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 574), ProcessCount=174 (was 174), AvailableMemoryMB=2668 (was 2690) 2023-07-21 00:14:34,032 WARN [Listener at localhost/41495] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 00:14:34,050 INFO [Listener at localhost/41495] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=502, OpenFileDescriptor=766, MaxFileDescriptor=60000, SystemLoadAverage=574, ProcessCount=174, AvailableMemoryMB=2667 2023-07-21 00:14:34,050 WARN [Listener at localhost/41495] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-21 00:14:34,051 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-21 00:14:34,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:34,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:34,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:34,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:34,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:34,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:34,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:34,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:34,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:34,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:34,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:34,067 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:34,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:34,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 
00:14:34,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:34,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:34,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:34,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:34,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:34,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33855] to rsgroup master 2023-07-21 00:14:34,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:34,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] ipc.CallRunner(144): callId: 980 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44654 deadline: 1689899674078, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 2023-07-21 00:14:34,079 WARN [Listener at localhost/41495] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33855 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:34,081 INFO [Listener at localhost/41495] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:34,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:34,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:34,082 INFO [Listener at localhost/41495] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33545, jenkins-hbase4.apache.org:42163, jenkins-hbase4.apache.org:43987, jenkins-hbase4.apache.org:46101], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:34,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:34,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33855] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:34,083 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 00:14:34,083 INFO [Listener at localhost/41495] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 00:14:34,083 DEBUG [Listener at localhost/41495] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48b9c2bd to 127.0.0.1:60276 2023-07-21 00:14:34,083 DEBUG [Listener at localhost/41495] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,084 DEBUG [Listener at localhost/41495] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 00:14:34,084 DEBUG [Listener at localhost/41495] util.JVMClusterUtil(257): Found active master hash=980031775, stopped=false 2023-07-21 00:14:34,085 DEBUG [Listener at localhost/41495] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 00:14:34,085 DEBUG [Listener at localhost/41495] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 00:14:34,085 INFO [Listener at localhost/41495] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:34,087 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:34,087 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:34,087 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:34,087 DEBUG 
[Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:34,087 INFO [Listener at localhost/41495] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 00:14:34,087 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:34,087 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:34,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:34,088 DEBUG [Listener at localhost/41495] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x007d59e8 to 127.0.0.1:60276 2023-07-21 00:14:34,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:34,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:34,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:34,088 DEBUG [Listener at localhost/41495] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:34,088 INFO [Listener at localhost/41495] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42163,1689898450682' ***** 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33545,1689898450890' ***** 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:34,089 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46101,1689898451098' ***** 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:34,089 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:34,089 INFO [Listener at localhost/41495] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43987,1689898455241' ***** 2023-07-21 00:14:34,089 INFO [Listener at 
localhost/41495] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:34,089 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:34,093 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:34,102 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,104 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:34,102 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,103 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:34,103 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,110 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:34,110 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@681f0c80{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:34,110 INFO [RS:3;jenkins-hbase4:43987] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@16958b7c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:34,110 INFO [RS:1;jenkins-hbase4:33545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3000c365{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:34,110 INFO [RS:2;jenkins-hbase4:46101] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3a992b6f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:34,115 INFO [RS:2;jenkins-hbase4:46101] server.AbstractConnector(383): Stopped ServerConnector@150955d2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,115 INFO [RS:0;jenkins-hbase4:42163] server.AbstractConnector(383): Stopped ServerConnector@26730011{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,115 INFO [RS:2;jenkins-hbase4:46101] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:34,115 INFO [RS:0;jenkins-hbase4:42163] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:34,117 INFO [RS:2;jenkins-hbase4:46101] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b4c0447{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:34,119 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3751ed89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:34,119 INFO [RS:3;jenkins-hbase4:43987] server.AbstractConnector(383): 
Stopped ServerConnector@1d8aa3aa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,119 INFO [RS:2;jenkins-hbase4:46101] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1b1103b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:34,120 INFO [RS:0;jenkins-hbase4:42163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72780387{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:34,120 INFO [RS:3;jenkins-hbase4:43987] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:34,122 INFO [RS:3;jenkins-hbase4:43987] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68aba549{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:34,122 INFO [RS:1;jenkins-hbase4:33545] server.AbstractConnector(383): Stopped ServerConnector@44a214a1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,122 INFO [RS:3;jenkins-hbase4:43987] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f357cde{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:34,123 INFO [RS:1;jenkins-hbase4:33545] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:34,125 INFO [RS:1;jenkins-hbase4:33545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e075be0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:34,127 INFO [RS:3;jenkins-hbase4:43987] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:34,127 INFO [RS:0;jenkins-hbase4:42163] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:34,127 INFO [RS:0;jenkins-hbase4:42163] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:34,128 INFO [RS:0;jenkins-hbase4:42163] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:34,128 INFO [RS:1;jenkins-hbase4:33545] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d9efc77{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:34,127 INFO [RS:2;jenkins-hbase4:46101] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:34,127 INFO [RS:3;jenkins-hbase4:43987] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:34,127 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:34,129 INFO [RS:2;jenkins-hbase4:46101] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 00:14:34,129 INFO [RS:3;jenkins-hbase4:43987] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:34,128 INFO [RS:1;jenkins-hbase4:33545] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:34,128 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:34,129 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(3305): Received CLOSE for 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:34,129 DEBUG [RS:0;jenkins-hbase4:42163] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2eb7c2fc to 127.0.0.1:60276 2023-07-21 00:14:34,129 INFO [RS:2;jenkins-hbase4:46101] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:34,129 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(3305): Received CLOSE for 8569b93240f0e75794ec901e80f2b563 2023-07-21 00:14:34,129 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,130 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:34,129 INFO [RS:1;jenkins-hbase4:33545] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:34,130 DEBUG [RS:3;jenkins-hbase4:43987] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ebd84fa to 127.0.0.1:60276 2023-07-21 00:14:34,130 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42163,1689898450682; all regions closed. 2023-07-21 00:14:34,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 914656f487f4e9f20596fe69207baae9, disabling compactions & flushes 2023-07-21 00:14:34,130 INFO [RS:1;jenkins-hbase4:33545] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:34,130 DEBUG [RS:3;jenkins-hbase4:43987] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,130 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(3305): Received CLOSE for b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:34,131 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 00:14:34,131 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(3305): Received CLOSE for 458c75b1350c6b08d5d38ff954236ee0 2023-07-21 00:14:34,131 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:34,131 DEBUG [RS:2;jenkins-hbase4:46101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f0e270d to 127.0.0.1:60276 2023-07-21 00:14:34,131 DEBUG [RS:2;jenkins-hbase4:46101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,131 INFO [RS:2;jenkins-hbase4:46101] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:34,131 INFO [RS:2;jenkins-hbase4:46101] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 00:14:34,131 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,131 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:34,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:34,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8569b93240f0e75794ec901e80f2b563, disabling compactions & flushes 2023-07-21 00:14:34,131 DEBUG [RS:1;jenkins-hbase4:33545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65facd7f to 127.0.0.1:60276 2023-07-21 00:14:34,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:34,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:34,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. after waiting 0 ms 2023-07-21 00:14:34,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:34,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8569b93240f0e75794ec901e80f2b563 1/1 column families, dataSize=28.43 KB heapSize=46.79 KB 2023-07-21 00:14:34,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:34,131 INFO [RS:2;jenkins-hbase4:46101] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:34,133 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 00:14:34,131 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1478): Online Regions={914656f487f4e9f20596fe69207baae9=testRename,,1689898467235.914656f487f4e9f20596fe69207baae9.} 2023-07-21 00:14:34,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. after waiting 0 ms 2023-07-21 00:14:34,131 DEBUG [RS:1;jenkins-hbase4:33545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,134 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33545,1689898450890; all regions closed. 2023-07-21 00:14:34,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
2023-07-21 00:14:34,330 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 00:14:34,330 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 00:14:34,331 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-21 00:14:34,331 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 00:14:34,332 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 00:14:34,331 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 00:14:34,332 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 00:14:34,332 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:34,332 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:34,332 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:34,332 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:34,332 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:34,332 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1478): Online Regions={8569b93240f0e75794ec901e80f2b563=hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563., b1518e854a007c33a819dec51b94a3c0=hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0., 1588230740=hbase:meta,,1.1588230740, 458c75b1350c6b08d5d38ff954236ee0=unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0.} 2023-07-21 00:14:34,334 DEBUG [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1504): Waiting on 914656f487f4e9f20596fe69207baae9 2023-07-21 00:14:34,334 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.40 KB heapSize=120.38 KB 2023-07-21 00:14:34,334 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1504): Waiting on 1588230740, 458c75b1350c6b08d5d38ff954236ee0, 8569b93240f0e75794ec901e80f2b563, b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:34,345 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/testRename/914656f487f4e9f20596fe69207baae9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 00:14:34,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 2023-07-21 00:14:34,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 914656f487f4e9f20596fe69207baae9: 2023-07-21 00:14:34,346 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689898467235.914656f487f4e9f20596fe69207baae9. 
2023-07-21 00:14:34,359 DEBUG [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs 2023-07-21 00:14:34,359 INFO [RS:0;jenkins-hbase4:42163] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42163%2C1689898450682:(num 1689898453390) 2023-07-21 00:14:34,359 DEBUG [RS:0;jenkins-hbase4:42163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,359 INFO [RS:0;jenkins-hbase4:42163] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,359 INFO [RS:0;jenkins-hbase4:42163] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:34,360 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:34,360 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:34,360 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:34,360 INFO [RS:0;jenkins-hbase4:42163] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:34,364 INFO [RS:0;jenkins-hbase4:42163] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42163 2023-07-21 00:14:34,371 DEBUG [RS:1;jenkins-hbase4:33545] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs 2023-07-21 00:14:34,371 INFO [RS:1;jenkins-hbase4:33545] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33545%2C1689898450890:(num 1689898453390) 2023-07-21 00:14:34,371 DEBUG [RS:1;jenkins-hbase4:33545] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,371 INFO [RS:1;jenkins-hbase4:33545] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,371 INFO [RS:1;jenkins-hbase4:33545] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:34,372 INFO [RS:1;jenkins-hbase4:33545] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:34,373 INFO [RS:1;jenkins-hbase4:33545] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:34,373 INFO [RS:1;jenkins-hbase4:33545] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:34,373 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 00:14:34,374 INFO [RS:1;jenkins-hbase4:33545] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33545 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,434 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:34,433 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33545,1689898450890 2023-07-21 00:14:34,434 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,434 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,434 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:34,434 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42163,1689898450682 2023-07-21 00:14:34,435 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33545,1689898450890] 2023-07-21 00:14:34,436 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33545,1689898450890; numProcessing=1 2023-07-21 00:14:34,438 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33545,1689898450890 already deleted, retry=false 2023-07-21 00:14:34,438 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33545,1689898450890 expired; onlineServers=3 2023-07-21 00:14:34,438 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42163,1689898450682] 2023-07-21 00:14:34,439 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42163,1689898450682; numProcessing=2 2023-07-21 00:14:34,440 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42163,1689898450682 already deleted, retry=false 2023-07-21 00:14:34,440 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42163,1689898450682 expired; onlineServers=2 2023-07-21 00:14:34,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.42 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/info/619ef187c6524d72aa52072e97bb9a4e 2023-07-21 00:14:34,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.43 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/.tmp/m/a7bb74e7d810473b8628618e3d3f8fff 2023-07-21 00:14:34,520 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-21 00:14:34,520 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-21 00:14:34,535 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43987,1689898455241; all regions closed. 
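[Editor's note] The entries above show each region server's ephemeral znode under /hbase/rs disappearing as the server shuts down, the other ZooKeeper sessions receiving NodeDeleted / NodeChildrenChanged events, and the master's RegionServerTracker then processing the expiration. The following is a minimal, generic sketch of that register-and-watch pattern using the plain Apache ZooKeeper client; it is an illustration of the mechanism only, not HBase's ZKWatcher or RegionServerTracker code, and the quorum address and paths are placeholders.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Generic sketch: "servers" register ephemeral znodes under a parent path; a
// "tracker" watches the parent's children and is notified when a server's
// session ends and its ephemeral node is removed (the NodeChildrenChanged /
// NodeDeleted events seen in the log above).
public class EphemeralTrackerSketch {

  // Open a session and block until it is actually connected.
  static ZooKeeper connect(String quorum) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper(quorum, 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();
    return zk;
  }

  public static void main(String[] args) throws Exception {
    String quorum = "127.0.0.1:2181";   // placeholder, not the test's quorum

    ZooKeeper tracker = connect(quorum);
    if (tracker.exists("/demo", false) == null) {
      tracker.create("/demo", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }
    if (tracker.exists("/demo/rs", false) == null) {
      tracker.create("/demo/rs", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // A "region server" registers itself with an ephemeral node;
    // the node vanishes automatically when its session closes.
    ZooKeeper server = connect(quorum);
    server.create("/demo/rs/server-1", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // The tracker lists the registered servers and arms a one-shot child watch.
    Watcher childWatcher = new Watcher() {
      @Override public void process(WatchedEvent event) {
        System.out.println("event " + event.getType() + " on " + event.getPath());
      }
    };
    List<String> registered = tracker.getChildren("/demo/rs", childWatcher);
    System.out.println("registered servers: " + registered);

    // Closing the server's session deletes the ephemeral node and fires the watch.
    server.close();
    Thread.sleep(1000);   // give the notification time to arrive in this sketch
    tracker.close();
  }
}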
2023-07-21 00:14:34,536 DEBUG [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1504): Waiting on 1588230740, 458c75b1350c6b08d5d38ff954236ee0, 8569b93240f0e75794ec901e80f2b563, b1518e854a007c33a819dec51b94a3c0 2023-07-21 00:14:34,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7bb74e7d810473b8628618e3d3f8fff 2023-07-21 00:14:34,541 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 619ef187c6524d72aa52072e97bb9a4e 2023-07-21 00:14:34,558 DEBUG [RS:3;jenkins-hbase4:43987] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs 2023-07-21 00:14:34,558 INFO [RS:3;jenkins-hbase4:43987] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43987%2C1689898455241:(num 1689898455661) 2023-07-21 00:14:34,558 DEBUG [RS:3;jenkins-hbase4:43987] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,559 INFO [RS:3;jenkins-hbase4:43987] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,559 INFO [RS:3;jenkins-hbase4:43987] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:34,559 INFO [RS:3;jenkins-hbase4:43987] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:34,559 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:34,559 INFO [RS:3;jenkins-hbase4:43987] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:34,559 INFO [RS:3;jenkins-hbase4:43987] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 00:14:34,560 INFO [RS:3;jenkins-hbase4:43987] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43987 2023-07-21 00:14:34,562 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:34,562 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,562 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43987,1689898455241 2023-07-21 00:14:34,563 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43987,1689898455241] 2023-07-21 00:14:34,564 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43987,1689898455241; numProcessing=3 2023-07-21 00:14:34,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/.tmp/m/a7bb74e7d810473b8628618e3d3f8fff as hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/m/a7bb74e7d810473b8628618e3d3f8fff 2023-07-21 00:14:34,567 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43987,1689898455241 already deleted, retry=false 2023-07-21 00:14:34,567 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43987,1689898455241 expired; onlineServers=1 2023-07-21 00:14:34,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a7bb74e7d810473b8628618e3d3f8fff 2023-07-21 00:14:34,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/m/a7bb74e7d810473b8628618e3d3f8fff, entries=28, sequenceid=95, filesize=6.1 K 2023-07-21 00:14:34,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.43 KB/29115, heapSize ~46.77 KB/47896, currentSize=0 B/0 for 8569b93240f0e75794ec901e80f2b563 in 450ms, sequenceid=95, compaction requested=false 2023-07-21 00:14:34,586 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/rep_barrier/66523c7226e64761ae9560a226f6aba5 2023-07-21 00:14:34,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/rsgroup/8569b93240f0e75794ec901e80f2b563/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-21 00:14:34,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:34,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:34,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66523c7226e64761ae9560a226f6aba5 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8569b93240f0e75794ec901e80f2b563: 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689898454088.8569b93240f0e75794ec901e80f2b563. 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b1518e854a007c33a819dec51b94a3c0, disabling compactions & flushes 2023-07-21 00:14:34,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. after waiting 0 ms 2023-07-21 00:14:34,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 
2023-07-21 00:14:34,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b1518e854a007c33a819dec51b94a3c0 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 00:14:34,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/.tmp/info/0914dbfa12b942a098b27cbd149c6614 2023-07-21 00:14:34,623 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/table/c6639f18659a4e4c921fa540f840a27b 2023-07-21 00:14:34,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6639f18659a4e4c921fa540f840a27b 2023-07-21 00:14:34,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/.tmp/info/0914dbfa12b942a098b27cbd149c6614 as hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/info/0914dbfa12b942a098b27cbd149c6614 2023-07-21 00:14:34,632 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/info/619ef187c6524d72aa52072e97bb9a4e as hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/info/619ef187c6524d72aa52072e97bb9a4e 2023-07-21 00:14:34,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/info/0914dbfa12b942a098b27cbd149c6614, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 00:14:34,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for b1518e854a007c33a819dec51b94a3c0 in 42ms, sequenceid=6, compaction requested=false 2023-07-21 00:14:34,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 619ef187c6524d72aa52072e97bb9a4e 2023-07-21 00:14:34,641 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/info/619ef187c6524d72aa52072e97bb9a4e, entries=92, sequenceid=196, filesize=15.3 K 2023-07-21 00:14:34,644 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/rep_barrier/66523c7226e64761ae9560a226f6aba5 as 
hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/rep_barrier/66523c7226e64761ae9560a226f6aba5 2023-07-21 00:14:34,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/namespace/b1518e854a007c33a819dec51b94a3c0/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 00:14:34,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:34,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b1518e854a007c33a819dec51b94a3c0: 2023-07-21 00:14:34,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689898454045.b1518e854a007c33a819dec51b94a3c0. 2023-07-21 00:14:34,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 458c75b1350c6b08d5d38ff954236ee0, disabling compactions & flushes 2023-07-21 00:14:34,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:34,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:34,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. after waiting 0 ms 2023-07-21 00:14:34,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:34,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/default/unmovedTable/458c75b1350c6b08d5d38ff954236ee0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-21 00:14:34,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 66523c7226e64761ae9560a226f6aba5 2023-07-21 00:14:34,652 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/rep_barrier/66523c7226e64761ae9560a226f6aba5, entries=18, sequenceid=196, filesize=6.9 K 2023-07-21 00:14:34,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 2023-07-21 00:14:34,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 458c75b1350c6b08d5d38ff954236ee0: 2023-07-21 00:14:34,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689898468921.458c75b1350c6b08d5d38ff954236ee0. 
2023-07-21 00:14:34,652 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/.tmp/table/c6639f18659a4e4c921fa540f840a27b as hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/table/c6639f18659a4e4c921fa540f840a27b 2023-07-21 00:14:34,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6639f18659a4e4c921fa540f840a27b 2023-07-21 00:14:34,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/table/c6639f18659a4e4c921fa540f840a27b, entries=31, sequenceid=196, filesize=7.4 K 2023-07-21 00:14:34,659 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.40 KB/78237, heapSize ~120.34 KB/123224, currentSize=0 B/0 for 1588230740 in 327ms, sequenceid=196, compaction requested=false 2023-07-21 00:14:34,669 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/data/hbase/meta/1588230740/recovered.edits/199.seqid, newMaxSeqId=199, maxSeqId=1 2023-07-21 00:14:34,669 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:34,670 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:34,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:34,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:34,735 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,735 INFO [RS:3;jenkins-hbase4:43987] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43987,1689898455241; zookeeper connection closed. 2023-07-21 00:14:34,735 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:43987-0x101853a75f2000b, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,736 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@63da13f8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@63da13f8 2023-07-21 00:14:34,736 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46101,1689898451098; all regions closed. 
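[Editor's note] Every flush in the entries above follows the same two-step shape: DefaultStoreFlusher writes the memstore contents to a file under the region's .tmp directory, and HRegionFileSystem then "commits" it by moving it into the column-family directory before HStore picks it up. The following is a generic sketch of that write-to-temp-then-rename idiom using the Hadoop FileSystem API; it illustrates the pattern only, is not HBase's flush code, and all paths are made up.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the flush-commit idiom from the log: write the new file completely
// under a .tmp directory, then publish it with a single rename so readers never
// observe a partially written file.
public class TmpThenRenameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);   // local FS unless configured for HDFS

    Path tmpFile = new Path("/tmp/demo-region/.tmp/cf/flushed-000001");
    Path finalFile = new Path("/tmp/demo-region/cf/flushed-000001");

    // Step 1: write the complete file under .tmp.
    fs.mkdirs(tmpFile.getParent());
    try (FSDataOutputStream out = fs.create(tmpFile, true)) {
      out.write("flushed cells would go here".getBytes(StandardCharsets.UTF_8));
    }

    // Step 2: "commit" by renaming into the store directory.
    fs.mkdirs(finalFile.getParent());
    if (!fs.rename(tmpFile, finalFile)) {
      throw new IOException("rename failed: " + tmpFile + " -> " + finalFile);
    }
    System.out.println("committed " + finalFile);
  }
}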
2023-07-21 00:14:34,742 DEBUG [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs 2023-07-21 00:14:34,742 INFO [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46101%2C1689898451098.meta:.meta(num 1689898453544) 2023-07-21 00:14:34,748 DEBUG [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/oldWALs 2023-07-21 00:14:34,749 INFO [RS:2;jenkins-hbase4:46101] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46101%2C1689898451098:(num 1689898453390) 2023-07-21 00:14:34,749 DEBUG [RS:2;jenkins-hbase4:46101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,749 INFO [RS:2;jenkins-hbase4:46101] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:34,749 INFO [RS:2;jenkins-hbase4:46101] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:34,749 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:34,750 INFO [RS:2;jenkins-hbase4:46101] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46101 2023-07-21 00:14:34,755 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46101,1689898451098 2023-07-21 00:14:34,755 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:34,756 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46101,1689898451098] 2023-07-21 00:14:34,756 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46101,1689898451098; numProcessing=4 2023-07-21 00:14:34,757 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46101,1689898451098 already deleted, retry=false 2023-07-21 00:14:34,758 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46101,1689898451098 expired; onlineServers=0 2023-07-21 00:14:34,758 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33855,1689898448530' ***** 2023-07-21 00:14:34,758 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 00:14:34,758 DEBUG [M:0;jenkins-hbase4:33855] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b571324, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:34,759 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:34,761 INFO 
[M:0;jenkins-hbase4:33855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@96a2503{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:34,762 INFO [M:0;jenkins-hbase4:33855] server.AbstractConnector(383): Stopped ServerConnector@49aa43f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,762 INFO [M:0;jenkins-hbase4:33855] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:34,762 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:34,762 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:34,762 INFO [M:0;jenkins-hbase4:33855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5b89ffdd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:34,763 INFO [M:0;jenkins-hbase4:33855] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3ec3711a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:34,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:34,763 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33855,1689898448530 2023-07-21 00:14:34,763 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33855,1689898448530; all regions closed. 2023-07-21 00:14:34,763 DEBUG [M:0;jenkins-hbase4:33855] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:34,763 INFO [M:0;jenkins-hbase4:33855] master.HMaster(1491): Stopping master jetty server 2023-07-21 00:14:34,764 INFO [M:0;jenkins-hbase4:33855] server.AbstractConnector(383): Stopped ServerConnector@71c2565{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:34,764 DEBUG [M:0;jenkins-hbase4:33855] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 00:14:34,765 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 00:14:34,765 DEBUG [M:0;jenkins-hbase4:33855] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 00:14:34,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898452928] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898452928,5,FailOnTimeoutGroup] 2023-07-21 00:14:34,765 INFO [M:0;jenkins-hbase4:33855] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 00:14:34,765 INFO [M:0;jenkins-hbase4:33855] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-21 00:14:34,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898452928] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898452928,5,FailOnTimeoutGroup] 2023-07-21 00:14:34,765 INFO [M:0;jenkins-hbase4:33855] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-21 00:14:34,765 DEBUG [M:0;jenkins-hbase4:33855] master.HMaster(1512): Stopping service threads 2023-07-21 00:14:34,765 INFO [M:0;jenkins-hbase4:33855] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 00:14:34,765 ERROR [M:0;jenkins-hbase4:33855] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-21 00:14:34,766 INFO [M:0;jenkins-hbase4:33855] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 00:14:34,766 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 00:14:34,766 DEBUG [M:0;jenkins-hbase4:33855] zookeeper.ZKUtil(398): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 00:14:34,767 WARN [M:0;jenkins-hbase4:33855] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 00:14:34,767 INFO [M:0;jenkins-hbase4:33855] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 00:14:34,767 INFO [M:0;jenkins-hbase4:33855] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 00:14:34,767 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:34,767 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:34,767 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:34,767 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:34,767 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 00:14:34,767 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=490.38 KB heapSize=586.30 KB 2023-07-21 00:14:34,784 INFO [M:0;jenkins-hbase4:33855] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=490.38 KB at sequenceid=1080 (bloomFilter=true), to=hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/47150ccd03a04c54b5d249e0cb5c4153 2023-07-21 00:14:34,791 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/47150ccd03a04c54b5d249e0cb5c4153 as hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/47150ccd03a04c54b5d249e0cb5c4153 2023-07-21 00:14:34,797 INFO [M:0;jenkins-hbase4:33855] regionserver.HStore(1080): Added hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/47150ccd03a04c54b5d249e0cb5c4153, entries=145, sequenceid=1080, filesize=25.7 K 2023-07-21 00:14:34,798 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegion(2948): Finished flush of dataSize ~490.38 KB/502152, heapSize ~586.29 KB/600360, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1080, compaction requested=false 2023-07-21 00:14:34,800 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:34,800 DEBUG [M:0;jenkins-hbase4:33855] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:34,811 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:34,811 INFO [M:0;jenkins-hbase4:33855] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 00:14:34,811 INFO [M:0;jenkins-hbase4:33855] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33855 2023-07-21 00:14:34,813 DEBUG [M:0;jenkins-hbase4:33855] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33855,1689898448530 already deleted, retry=false 2023-07-21 00:14:34,832 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 00:14:34,835 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,836 INFO [RS:1;jenkins-hbase4:33545] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33545,1689898450890; zookeeper connection closed. 
2023-07-21 00:14:34,836 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:33545-0x101853a75f20002, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,836 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@23f278c5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@23f278c5 2023-07-21 00:14:34,936 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,936 INFO [RS:0;jenkins-hbase4:42163] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42163,1689898450682; zookeeper connection closed. 2023-07-21 00:14:34,936 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:42163-0x101853a75f20001, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:34,936 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@17a9cd5d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@17a9cd5d 2023-07-21 00:14:35,036 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:35,036 INFO [M:0;jenkins-hbase4:33855] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33855,1689898448530; zookeeper connection closed. 2023-07-21 00:14:35,036 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): master:33855-0x101853a75f20000, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:35,136 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:35,136 INFO [RS:2;jenkins-hbase4:46101] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46101,1689898451098; zookeeper connection closed. 
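[Editor's note] The watcher callbacks above are the tail end of the shutdown; a few lines further on the utility reports "Minicluster is down" and immediately starts a fresh minicluster with the same StartMiniClusterOption (1 master, 3 region servers, 3 datanodes, 1 ZK server). The following is a minimal sketch of how a test typically drives that stop/start cycle with the HBase 2.x testing API; the TEST_UTIL field name is illustrative, the builder method names are assumed from that API, and the option values are the ones printed in the log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Sketch of the restart cycle recorded in the log: shut down the current
// minicluster, then start a new one with the option printed at startup
// (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1).
public class MiniClusterRestartSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();

    TEST_UTIL.startMiniCluster(option);     // "Starting up minicluster with option: ..."
    try {
      // ... test body would run against the cluster here ...
    } finally {
      TEST_UTIL.shutdownMiniCluster();      // "Minicluster is down"
    }

    // Some suites, like the one in this log, start a second cluster right away.
    TEST_UTIL.startMiniCluster(option);
    TEST_UTIL.shutdownMiniCluster();
  }
}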
2023-07-21 00:14:35,136 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): regionserver:46101-0x101853a75f20003, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:35,137 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5741ddb2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5741ddb2 2023-07-21 00:14:35,137 INFO [Listener at localhost/41495] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 00:14:35,137 WARN [Listener at localhost/41495] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:35,143 INFO [Listener at localhost/41495] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:35,247 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:35,247 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-668481516-172.31.14.131-1689898444457 (Datanode Uuid fbe2f6f3-31c9-46ac-bb54-bf23bf6d370f) service to localhost/127.0.0.1:36751 2023-07-21 00:14:35,249 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data5/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,249 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data6/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,252 WARN [Listener at localhost/41495] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:35,261 INFO [Listener at localhost/41495] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:35,365 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:35,365 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-668481516-172.31.14.131-1689898444457 (Datanode Uuid a899477f-5274-4fd3-8839-f328942b70f1) service to localhost/127.0.0.1:36751 2023-07-21 00:14:35,366 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data3/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,367 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data4/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,375 WARN [Listener at localhost/41495] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:35,389 INFO [Listener at localhost/41495] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:35,497 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:35,497 WARN [BP-668481516-172.31.14.131-1689898444457 heartbeating to localhost/127.0.0.1:36751] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-668481516-172.31.14.131-1689898444457 (Datanode Uuid 80b2fc5a-5cc3-4abc-8604-37c287e2d8f2) service to localhost/127.0.0.1:36751 2023-07-21 00:14:35,497 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data1/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,498 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/cluster_7bdf7489-55f0-e02d-1b93-10dd76f66c48/dfs/data/data2/current/BP-668481516-172.31.14.131-1689898444457] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:35,535 INFO [Listener at localhost/41495] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:35,662 INFO [Listener at localhost/41495] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.log.dir so I do NOT create it in target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/69c3a257-51b5-f17f-7be6-55f6527f2b09/hadoop.tmp.dir so I do NOT create it in target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a, deleteOnExit=true 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/test.cache.data in system properties and HBase conf 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 00:14:35,721 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 00:14:35,722 DEBUG [Listener at localhost/41495] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 00:14:35,722 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/nfs.dump.dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 00:14:35,723 INFO [Listener at localhost/41495] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 00:14:35,728 WARN [Listener at localhost/41495] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:35,728 WARN [Listener at localhost/41495] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:35,754 DEBUG [Listener at localhost/41495-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101853a75f2000a, quorum=127.0.0.1:60276, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 00:14:35,754 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101853a75f2000a, quorum=127.0.0.1:60276, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 00:14:35,776 WARN [Listener at localhost/41495] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:35,778 INFO [Listener at localhost/41495] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:35,783 INFO [Listener at localhost/41495] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/Jetty_localhost_41611_hdfs____.mp8659/webapp 2023-07-21 00:14:35,888 INFO [Listener at localhost/41495] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41611 2023-07-21 00:14:35,893 WARN [Listener at localhost/41495] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:35,894 WARN [Listener at localhost/41495] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:36,010 WARN [Listener at localhost/42959] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:36,067 WARN [Listener at localhost/42959] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:36,074 WARN [Listener 
at localhost/42959] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:36,076 INFO [Listener at localhost/42959] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:36,085 INFO [Listener at localhost/42959] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/Jetty_localhost_35973_datanode____stdpy2/webapp 2023-07-21 00:14:36,220 INFO [Listener at localhost/42959] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35973 2023-07-21 00:14:36,230 WARN [Listener at localhost/40203] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:36,297 WARN [Listener at localhost/40203] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:36,309 WARN [Listener at localhost/40203] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:36,311 INFO [Listener at localhost/40203] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:36,318 INFO [Listener at localhost/40203] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/Jetty_localhost_38443_datanode____.p26hnv/webapp 2023-07-21 00:14:36,437 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcb3b90d536f1bd8d: Processing first storage report for DS-645ae859-8a7f-4034-aa18-66fc096b91fa from datanode 45e5e002-240b-4d3b-94c7-cf3a628b8284 2023-07-21 00:14:36,438 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcb3b90d536f1bd8d: from storage DS-645ae859-8a7f-4034-aa18-66fc096b91fa node DatanodeRegistration(127.0.0.1:33851, datanodeUuid=45e5e002-240b-4d3b-94c7-cf3a628b8284, infoPort=46127, infoSecurePort=0, ipcPort=40203, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,438 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcb3b90d536f1bd8d: Processing first storage report for DS-8fe384f0-dc94-4ba2-a11a-7e1e03d7dc7e from datanode 45e5e002-240b-4d3b-94c7-cf3a628b8284 2023-07-21 00:14:36,438 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcb3b90d536f1bd8d: from storage DS-8fe384f0-dc94-4ba2-a11a-7e1e03d7dc7e node DatanodeRegistration(127.0.0.1:33851, datanodeUuid=45e5e002-240b-4d3b-94c7-cf3a628b8284, infoPort=46127, infoSecurePort=0, ipcPort=40203, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,459 INFO [Listener at localhost/40203] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38443 2023-07-21 00:14:36,468 WARN [Listener at localhost/35327] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-21 00:14:36,489 WARN [Listener at localhost/35327] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:36,492 WARN [Listener at localhost/35327] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:36,493 INFO [Listener at localhost/35327] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:36,498 INFO [Listener at localhost/35327] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/Jetty_localhost_42875_datanode____.t520ch/webapp 2023-07-21 00:14:36,611 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd77fdf68e8ecf1f: Processing first storage report for DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843 from datanode cbed6db4-32a8-4b39-b004-be82e94f4643 2023-07-21 00:14:36,611 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd77fdf68e8ecf1f: from storage DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843 node DatanodeRegistration(127.0.0.1:42517, datanodeUuid=cbed6db4-32a8-4b39-b004-be82e94f4643, infoPort=39721, infoSecurePort=0, ipcPort=35327, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,611 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd77fdf68e8ecf1f: Processing first storage report for DS-add529bb-215d-494f-b2e8-4cd021f8d8ee from datanode cbed6db4-32a8-4b39-b004-be82e94f4643 2023-07-21 00:14:36,611 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd77fdf68e8ecf1f: from storage DS-add529bb-215d-494f-b2e8-4cd021f8d8ee node DatanodeRegistration(127.0.0.1:42517, datanodeUuid=cbed6db4-32a8-4b39-b004-be82e94f4643, infoPort=39721, infoSecurePort=0, ipcPort=35327, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,627 INFO [Listener at localhost/35327] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42875 2023-07-21 00:14:36,642 WARN [Listener at localhost/38819] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:36,775 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdd7f1d45e4acccb1: Processing first storage report for DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b from datanode e6e9547b-0cfa-448f-8416-1a8371a86d57 2023-07-21 00:14:36,775 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdd7f1d45e4acccb1: from storage DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b node DatanodeRegistration(127.0.0.1:38697, datanodeUuid=e6e9547b-0cfa-448f-8416-1a8371a86d57, infoPort=38891, infoSecurePort=0, ipcPort=38819, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,775 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdd7f1d45e4acccb1: Processing first storage 
report for DS-f0c91473-7654-4e39-b4c9-32d0e061b7bf from datanode e6e9547b-0cfa-448f-8416-1a8371a86d57 2023-07-21 00:14:36,775 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdd7f1d45e4acccb1: from storage DS-f0c91473-7654-4e39-b4c9-32d0e061b7bf node DatanodeRegistration(127.0.0.1:38697, datanodeUuid=e6e9547b-0cfa-448f-8416-1a8371a86d57, infoPort=38891, infoSecurePort=0, ipcPort=38819, storageInfo=lv=-57;cid=testClusterID;nsid=709119220;c=1689898475731), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:36,871 DEBUG [Listener at localhost/38819] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e 2023-07-21 00:14:36,874 INFO [Listener at localhost/38819] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/zookeeper_0, clientPort=63294, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 00:14:36,876 INFO [Listener at localhost/38819] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63294 2023-07-21 00:14:36,876 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:36,877 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:36,898 INFO [Listener at localhost/38819] util.FSUtils(471): Created version file at hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c with version=8 2023-07-21 00:14:36,898 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/hbase-staging 2023-07-21 00:14:36,899 DEBUG [Listener at localhost/38819] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 00:14:36,899 DEBUG [Listener at localhost/38819] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 00:14:36,899 DEBUG [Listener at localhost/38819] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 00:14:36,899 DEBUG [Listener at localhost/38819] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-21 00:14:36,900 INFO [Listener at localhost/38819] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:36,900 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:36,900 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:36,900 INFO [Listener at localhost/38819] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:36,900 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:36,901 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:36,901 INFO [Listener at localhost/38819] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:36,902 INFO [Listener at localhost/38819] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36667 2023-07-21 00:14:36,903 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:36,904 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:36,905 INFO [Listener at localhost/38819] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36667 connecting to ZooKeeper ensemble=127.0.0.1:63294 2023-07-21 00:14:36,916 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:366670x0, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:36,920 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36667-0x101853ae84c0000 connected 2023-07-21 00:14:36,941 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:36,945 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:36,945 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:36,948 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36667 2023-07-21 00:14:36,949 DEBUG [Listener at localhost/38819] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36667 2023-07-21 00:14:36,954 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36667 2023-07-21 00:14:36,954 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36667 2023-07-21 00:14:36,955 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36667 2023-07-21 00:14:36,957 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:36,957 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:36,957 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:36,958 INFO [Listener at localhost/38819] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 00:14:36,958 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:36,958 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:36,958 INFO [Listener at localhost/38819] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 00:14:36,958 INFO [Listener at localhost/38819] http.HttpServer(1146): Jetty bound to port 45639 2023-07-21 00:14:36,959 INFO [Listener at localhost/38819] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:36,963 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:36,964 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1890f436{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:36,964 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:36,964 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@551ae954{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:37,083 INFO [Listener at localhost/38819] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:37,085 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:37,085 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:37,085 INFO [Listener at localhost/38819] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:37,087 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,089 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@28d2768{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/jetty-0_0_0_0-45639-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3325469201129439216/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:37,090 INFO [Listener at localhost/38819] server.AbstractConnector(333): Started ServerConnector@ed4524b{HTTP/1.1, (http/1.1)}{0.0.0.0:45639} 2023-07-21 00:14:37,091 INFO [Listener at localhost/38819] server.Server(415): Started @34902ms 2023-07-21 00:14:37,091 INFO [Listener at localhost/38819] master.HMaster(444): hbase.rootdir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c, hbase.cluster.distributed=false 2023-07-21 00:14:37,123 INFO [Listener at localhost/38819] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:37,124 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,124 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,124 INFO 
[Listener at localhost/38819] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:37,124 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,124 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:37,124 INFO [Listener at localhost/38819] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:37,126 INFO [Listener at localhost/38819] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37637 2023-07-21 00:14:37,126 INFO [Listener at localhost/38819] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:37,128 DEBUG [Listener at localhost/38819] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:37,129 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,130 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,131 INFO [Listener at localhost/38819] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37637 connecting to ZooKeeper ensemble=127.0.0.1:63294 2023-07-21 00:14:37,137 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:376370x0, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:37,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37637-0x101853ae84c0001 connected 2023-07-21 00:14:37,138 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:37,139 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:37,139 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:37,141 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37637 2023-07-21 00:14:37,141 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37637 2023-07-21 00:14:37,142 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37637 2023-07-21 00:14:37,150 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37637 2023-07-21 00:14:37,150 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37637 2023-07-21 00:14:37,154 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:37,154 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:37,154 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:37,155 INFO [Listener at localhost/38819] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:37,155 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:37,155 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:37,155 INFO [Listener at localhost/38819] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:37,157 INFO [Listener at localhost/38819] http.HttpServer(1146): Jetty bound to port 32793 2023-07-21 00:14:37,157 INFO [Listener at localhost/38819] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:37,179 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,179 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3dd08807{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:37,180 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,180 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76bebe84{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:37,311 INFO [Listener at localhost/38819] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:37,312 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:37,312 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:37,312 INFO [Listener at localhost/38819] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:37,314 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,315 INFO 
[Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@172af1a9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/jetty-0_0_0_0-32793-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4146766187011970021/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:37,316 INFO [Listener at localhost/38819] server.AbstractConnector(333): Started ServerConnector@39caab19{HTTP/1.1, (http/1.1)}{0.0.0.0:32793} 2023-07-21 00:14:37,317 INFO [Listener at localhost/38819] server.Server(415): Started @35128ms 2023-07-21 00:14:37,337 INFO [Listener at localhost/38819] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:37,337 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,338 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,338 INFO [Listener at localhost/38819] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:37,338 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,338 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:37,339 INFO [Listener at localhost/38819] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:37,342 INFO [Listener at localhost/38819] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38311 2023-07-21 00:14:37,343 INFO [Listener at localhost/38819] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:37,344 DEBUG [Listener at localhost/38819] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:37,344 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,346 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,347 INFO [Listener at localhost/38819] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38311 connecting to ZooKeeper ensemble=127.0.0.1:63294 2023-07-21 00:14:37,350 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:383110x0, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 
00:14:37,351 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:383110x0, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:37,351 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38311-0x101853ae84c0002 connected 2023-07-21 00:14:37,352 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:37,352 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:37,356 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38311 2023-07-21 00:14:37,356 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38311 2023-07-21 00:14:37,356 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38311 2023-07-21 00:14:37,358 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38311 2023-07-21 00:14:37,359 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38311 2023-07-21 00:14:37,360 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:37,361 INFO [Listener at localhost/38819] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 00:14:37,362 INFO [Listener at localhost/38819] http.HttpServer(1146): Jetty bound to port 43049 2023-07-21 00:14:37,362 INFO [Listener at localhost/38819] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:37,365 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,365 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20989925{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:37,366 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,366 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@75145d60{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:37,492 INFO [Listener at localhost/38819] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:37,493 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:37,493 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:37,494 INFO [Listener at localhost/38819] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:37,495 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,496 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3f6bc563{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/jetty-0_0_0_0-43049-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6312637928282901847/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:37,497 INFO [Listener at localhost/38819] server.AbstractConnector(333): Started ServerConnector@40e93b12{HTTP/1.1, (http/1.1)}{0.0.0.0:43049} 2023-07-21 00:14:37,498 INFO [Listener at localhost/38819] server.Server(415): Started @35309ms 2023-07-21 00:14:37,515 INFO [Listener at localhost/38819] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:37,516 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,516 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,516 INFO [Listener at localhost/38819] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:37,516 INFO 
[Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:37,516 INFO [Listener at localhost/38819] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:37,516 INFO [Listener at localhost/38819] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:37,519 INFO [Listener at localhost/38819] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43031 2023-07-21 00:14:37,520 INFO [Listener at localhost/38819] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:37,523 DEBUG [Listener at localhost/38819] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:37,524 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,524 INFO [Listener at localhost/38819] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,525 INFO [Listener at localhost/38819] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43031 connecting to ZooKeeper ensemble=127.0.0.1:63294 2023-07-21 00:14:37,529 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:430310x0, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:37,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43031-0x101853ae84c0003 connected 2023-07-21 00:14:37,530 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:37,531 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:37,531 DEBUG [Listener at localhost/38819] zookeeper.ZKUtil(164): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:37,531 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43031 2023-07-21 00:14:37,532 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43031 2023-07-21 00:14:37,533 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43031 2023-07-21 00:14:37,533 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43031 2023-07-21 00:14:37,533 DEBUG [Listener at localhost/38819] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43031 2023-07-21 00:14:37,536 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:37,536 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:37,536 INFO [Listener at localhost/38819] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:37,537 INFO [Listener at localhost/38819] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:37,537 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:37,537 INFO [Listener at localhost/38819] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:37,537 INFO [Listener at localhost/38819] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:37,538 INFO [Listener at localhost/38819] http.HttpServer(1146): Jetty bound to port 37595 2023-07-21 00:14:37,538 INFO [Listener at localhost/38819] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:37,542 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,542 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@24cfa0e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:37,543 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,543 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@329690c5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:37,658 INFO [Listener at localhost/38819] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:37,659 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:37,659 INFO [Listener at localhost/38819] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:37,659 INFO [Listener at localhost/38819] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:37,663 INFO [Listener at localhost/38819] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:37,664 INFO [Listener at localhost/38819] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@156ae7ad{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/java.io.tmpdir/jetty-0_0_0_0-37595-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5251633307574132476/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:37,665 INFO [Listener at localhost/38819] server.AbstractConnector(333): Started ServerConnector@5fd1460f{HTTP/1.1, (http/1.1)}{0.0.0.0:37595} 2023-07-21 00:14:37,665 INFO [Listener at localhost/38819] server.Server(415): Started @35477ms 2023-07-21 00:14:37,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:37,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@6afc80d0{HTTP/1.1, (http/1.1)}{0.0.0.0:43351} 2023-07-21 00:14:37,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @35484ms 2023-07-21 00:14:37,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,676 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:37,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,678 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:37,678 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:37,678 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:37,678 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:37,678 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:37,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:37,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36667,1689898476899 from backup master directory 2023-07-21 00:14:37,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:37,683 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,683 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:37,683 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:37,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/hbase.id with ID: 75f7ea11-41e3-4ee8-9940-7180d0899837 2023-07-21 00:14:37,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:37,727 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:37,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2a1ba7a2 to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:37,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55e0f690, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:37,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:37,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 00:14:37,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:37,751 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store-tmp 2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:37,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:37,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 00:14:37,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:37,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/WALs/jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36667%2C1689898476899, suffix=, logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/WALs/jenkins-hbase4.apache.org,36667,1689898476899, archiveDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/oldWALs, maxLogs=10 2023-07-21 00:14:37,780 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK] 2023-07-21 00:14:37,780 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK] 2023-07-21 00:14:37,780 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK] 2023-07-21 00:14:37,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/WALs/jenkins-hbase4.apache.org,36667,1689898476899/jenkins-hbase4.apache.org%2C36667%2C1689898476899.1689898477764 2023-07-21 00:14:37,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK], DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK]] 2023-07-21 00:14:37,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:37,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:37,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,786 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,787 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 00:14:37,788 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 00:14:37,788 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:37,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,790 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:37,794 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:37,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11140823680, jitterRate=0.037570059299468994}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:37,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:37,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 00:14:37,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 00:14:37,796 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 00:14:37,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 00:14:37,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 00:14:37,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 00:14:37,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 00:14:37,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 00:14:37,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 00:14:37,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 00:14:37,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 00:14:37,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 00:14:37,802 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:37,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 00:14:37,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 00:14:37,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 00:14:37,804 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:37,804 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:37,804 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-21 00:14:37,804 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:37,805 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:37,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36667,1689898476899, sessionid=0x101853ae84c0000, setting cluster-up flag (Was=false) 2023-07-21 00:14:37,812 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:37,815 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 00:14:37,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,820 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:37,825 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 00:14:37,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:37,826 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.hbase-snapshot/.tmp 2023-07-21 00:14:37,828 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 00:14:37,828 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 00:14:37,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 00:14:37,830 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:37,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 00:14:37,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-21 00:14:37,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:37,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:37,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 00:14:37,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:37,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:37,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689898507848 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 00:14:37,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:37,849 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:37,849 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 00:14:37,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 00:14:37,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 00:14:37,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 00:14:37,851 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:37,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 00:14:37,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 00:14:37,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898477853,5,FailOnTimeoutGroup] 2023-07-21 00:14:37,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898477857,5,FailOnTimeoutGroup] 2023-07-21 00:14:37,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 00:14:37,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:37,871 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(951): ClusterId : 75f7ea11-41e3-4ee8-9940-7180d0899837 2023-07-21 00:14:37,876 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:37,878 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(951): ClusterId : 75f7ea11-41e3-4ee8-9940-7180d0899837 2023-07-21 00:14:37,878 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:37,878 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(951): ClusterId : 75f7ea11-41e3-4ee8-9940-7180d0899837 2023-07-21 00:14:37,879 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:37,879 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:37,879 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:37,879 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:37,879 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:37,879 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c 2023-07-21 00:14:37,884 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:37,884 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:37,885 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:37,886 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:37,886 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(43): Procedure 
online-snapshot initializing 2023-07-21 00:14:37,893 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:37,894 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ReadOnlyZKClient(139): Connect 0x246c343d to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:37,896 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ReadOnlyZKClient(139): Connect 0x72b98561 to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:37,896 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:37,899 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ReadOnlyZKClient(139): Connect 0x42a5f925 to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:37,911 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:37,932 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 00:14:37,934 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/info 2023-07-21 00:14:37,935 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:37,935 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:37,936 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:37,937 DEBUG [RS:1;jenkins-hbase4:38311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c620f01, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:37,937 DEBUG [RS:0;jenkins-hbase4:37637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aabdec0, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:37,938 DEBUG [RS:1;jenkins-hbase4:38311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d5f053a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:37,938 DEBUG [RS:0;jenkins-hbase4:37637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b96d4fb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:37,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:37,939 DEBUG [RS:2;jenkins-hbase4:43031] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7841ee11, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:37,939 DEBUG [RS:2;jenkins-hbase4:43031] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d437f7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:37,940 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:37,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:37,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:37,943 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/table 2023-07-21 00:14:37,943 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:37,944 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:37,945 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740 2023-07-21 00:14:37,945 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740 2023-07-21 00:14:37,948 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 00:14:37,949 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:37,951 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38311 2023-07-21 00:14:37,951 INFO [RS:1;jenkins-hbase4:38311] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:37,951 INFO [RS:1;jenkins-hbase4:38311] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:37,951 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:37,951 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36667,1689898476899 with isa=jenkins-hbase4.apache.org/172.31.14.131:38311, startcode=1689898477336 2023-07-21 00:14:37,952 DEBUG [RS:1;jenkins-hbase4:38311] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:37,952 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37637 2023-07-21 00:14:37,952 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43031 2023-07-21 00:14:37,952 INFO [RS:0;jenkins-hbase4:37637] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:37,952 INFO [RS:0;jenkins-hbase4:37637] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:37,952 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:37,952 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-21 00:14:37,952 INFO [RS:2;jenkins-hbase4:43031] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:37,952 INFO [RS:2;jenkins-hbase4:43031] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:37,952 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:37,953 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11123485920, jitterRate=0.03595535457134247}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:37,953 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36667,1689898476899 with isa=jenkins-hbase4.apache.org/172.31.14.131:37637, startcode=1689898477123 2023-07-21 00:14:37,953 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:37,953 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36667,1689898476899 with isa=jenkins-hbase4.apache.org/172.31.14.131:43031, startcode=1689898477515 2023-07-21 00:14:37,953 DEBUG [RS:0;jenkins-hbase4:37637] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:37,953 DEBUG [RS:2;jenkins-hbase4:43031] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:37,953 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:37,953 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:37,953 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:37,954 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:37,954 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:37,954 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53137, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:37,954 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:37,954 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:37,956 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36667] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,956 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 00:14:37,957 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 00:14:37,957 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c 2023-07-21 00:14:37,957 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45725, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:37,957 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44415, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:37,958 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:37,958 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 00:14:37,958 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42959 2023-07-21 00:14:37,958 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36667] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,958 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45639 2023-07-21 00:14:37,958 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:37,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 00:14:37,958 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 00:14:37,958 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36667] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,958 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 00:14:37,958 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c 2023-07-21 00:14:37,958 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 00:14:37,958 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42959 2023-07-21 00:14:37,958 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45639 2023-07-21 00:14:37,959 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c 2023-07-21 00:14:37,959 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42959 2023-07-21 00:14:37,959 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45639 2023-07-21 00:14:37,960 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 00:14:37,960 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:37,961 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 00:14:37,965 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ZKUtil(162): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43031,1689898477515] 2023-07-21 00:14:37,965 WARN [RS:1;jenkins-hbase4:38311] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 00:14:37,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38311,1689898477336] 2023-07-21 00:14:37,965 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ZKUtil(162): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,965 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ZKUtil(162): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,965 INFO [RS:1;jenkins-hbase4:38311] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:37,965 WARN [RS:2;jenkins-hbase4:43031] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:37,965 WARN [RS:0;jenkins-hbase4:37637] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:37,965 INFO [RS:2;jenkins-hbase4:43031] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:37,965 INFO [RS:0;jenkins-hbase4:37637] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:37,965 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37637,1689898477123] 2023-07-21 00:14:37,965 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,965 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,965 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,972 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ZKUtil(162): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,972 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ZKUtil(162): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,972 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ZKUtil(162): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:37,972 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ZKUtil(162): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,972 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ZKUtil(162): regionserver:37637-0x101853ae84c0001, 
quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,972 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ZKUtil(162): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:37,972 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ZKUtil(162): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,972 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ZKUtil(162): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,973 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ZKUtil(162): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:37,973 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:37,973 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:37,973 DEBUG [RS:1;jenkins-hbase4:38311] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:37,974 INFO [RS:2;jenkins-hbase4:43031] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:37,974 INFO [RS:0;jenkins-hbase4:37637] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:37,974 INFO [RS:1;jenkins-hbase4:38311] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:37,975 INFO [RS:2;jenkins-hbase4:43031] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:37,976 INFO [RS:2;jenkins-hbase4:43031] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:37,976 INFO [RS:1;jenkins-hbase4:38311] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:37,976 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,976 INFO [RS:1;jenkins-hbase4:38311] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:37,976 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:37,977 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:37,977 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:37,978 INFO [RS:0;jenkins-hbase4:37637] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:37,978 INFO [RS:0;jenkins-hbase4:37637] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:37,979 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,979 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,979 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,979 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,983 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:37,983 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service 
name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:37,984 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,984 DEBUG [RS:2;jenkins-hbase4:43031] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,984 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:1;jenkins-hbase4:38311] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,985 DEBUG [RS:0;jenkins-hbase4:37637] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:37,986 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,986 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,986 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,986 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,987 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,987 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,987 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,988 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,992 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,992 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,992 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:37,992 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,004 INFO [RS:1;jenkins-hbase4:38311] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:38,004 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38311,1689898477336-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,004 INFO [RS:2;jenkins-hbase4:43031] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:38,005 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43031,1689898477515-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,008 INFO [RS:0;jenkins-hbase4:37637] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:38,008 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37637,1689898477123-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:38,019 INFO [RS:2;jenkins-hbase4:43031] regionserver.Replication(203): jenkins-hbase4.apache.org,43031,1689898477515 started 2023-07-21 00:14:38,019 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43031,1689898477515, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43031, sessionid=0x101853ae84c0003 2023-07-21 00:14:38,019 INFO [RS:0;jenkins-hbase4:37637] regionserver.Replication(203): jenkins-hbase4.apache.org,37637,1689898477123 started 2023-07-21 00:14:38,019 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:38,019 DEBUG [RS:2;jenkins-hbase4:43031] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,019 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43031,1689898477515' 2023-07-21 00:14:38,019 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:38,019 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37637,1689898477123, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37637, sessionid=0x101853ae84c0001 2023-07-21 00:14:38,019 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:38,019 DEBUG [RS:0;jenkins-hbase4:37637] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,019 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37637,1689898477123' 2023-07-21 00:14:38,019 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:38,019 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37637,1689898477123' 2023-07-21 
00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43031,1689898477515' 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:38,020 INFO [RS:1;jenkins-hbase4:38311] regionserver.Replication(203): jenkins-hbase4.apache.org,38311,1689898477336 started 2023-07-21 00:14:38,020 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38311,1689898477336, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38311, sessionid=0x101853ae84c0002 2023-07-21 00:14:38,020 DEBUG [RS:2;jenkins-hbase4:43031] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:38,020 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38311,1689898477336' 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:38,020 DEBUG [RS:0;jenkins-hbase4:37637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:38,021 DEBUG [RS:2;jenkins-hbase4:43031] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:38,021 INFO [RS:2;jenkins-hbase4:43031] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:38,021 DEBUG [RS:0;jenkins-hbase4:37637] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:38,021 INFO [RS:0;jenkins-hbase4:37637] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38311,1689898477336' 2023-07-21 00:14:38,021 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:38,022 DEBUG [RS:1;jenkins-hbase4:38311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:38,022 DEBUG [RS:1;jenkins-hbase4:38311] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:38,022 INFO [RS:1;jenkins-hbase4:38311] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-21 00:14:38,023 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,023 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,023 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,024 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ZKUtil(398): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 00:14:38,024 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ZKUtil(398): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 00:14:38,024 INFO [RS:2;jenkins-hbase4:43031] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 00:14:38,024 INFO [RS:1;jenkins-hbase4:38311] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 00:14:38,024 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,024 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,024 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,024 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,026 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ZKUtil(398): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-21 00:14:38,026 INFO [RS:0;jenkins-hbase4:37637] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-21 00:14:38,026 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,026 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:38,112 DEBUG [jenkins-hbase4:36667] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:38,114 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43031,1689898477515, state=OPENING 2023-07-21 00:14:38,115 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 00:14:38,116 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:38,117 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:38,117 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43031,1689898477515}] 2023-07-21 00:14:38,128 INFO [RS:1;jenkins-hbase4:38311] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38311%2C1689898477336, suffix=, logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,38311,1689898477336, archiveDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs, maxLogs=32 2023-07-21 00:14:38,128 INFO [RS:0;jenkins-hbase4:37637] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37637%2C1689898477123, suffix=, logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,37637,1689898477123, archiveDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs, maxLogs=32 2023-07-21 00:14:38,128 INFO [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43031%2C1689898477515, suffix=, logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,43031,1689898477515, archiveDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs, maxLogs=32 2023-07-21 00:14:38,139 WARN [ReadOnlyZKClient-127.0.0.1:63294@0x2a1ba7a2] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 00:14:38,139 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:38,140 
INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:38,141 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43031] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:54660 deadline: 1689898538141, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,149 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK] 2023-07-21 00:14:38,149 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK] 2023-07-21 00:14:38,149 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK] 2023-07-21 00:14:38,156 INFO [RS:1;jenkins-hbase4:38311] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,38311,1689898477336/jenkins-hbase4.apache.org%2C38311%2C1689898477336.1689898478132 2023-07-21 00:14:38,157 DEBUG [RS:1;jenkins-hbase4:38311] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK], DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK], DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK]] 2023-07-21 00:14:38,164 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK] 2023-07-21 00:14:38,165 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK] 2023-07-21 00:14:38,164 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK] 2023-07-21 00:14:38,170 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK] 2023-07-21 00:14:38,171 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK] 2023-07-21 00:14:38,171 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK] 2023-07-21 00:14:38,173 INFO [RS:0;jenkins-hbase4:37637] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,37637,1689898477123/jenkins-hbase4.apache.org%2C37637%2C1689898477123.1689898478137 2023-07-21 00:14:38,174 DEBUG [RS:0;jenkins-hbase4:37637] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK], DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK]] 2023-07-21 00:14:38,174 INFO [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,43031,1689898477515/jenkins-hbase4.apache.org%2C43031%2C1689898477515.1689898478137 2023-07-21 00:14:38,175 DEBUG [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK], DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK]] 2023-07-21 00:14:38,271 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,273 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:38,275 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54674, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:38,279 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 00:14:38,279 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:38,281 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43031%2C1689898477515.meta, suffix=.meta, logDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,43031,1689898477515, archiveDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs, maxLogs=32 2023-07-21 00:14:38,297 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK] 2023-07-21 00:14:38,299 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK] 2023-07-21 00:14:38,299 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK] 2023-07-21 00:14:38,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/WALs/jenkins-hbase4.apache.org,43031,1689898477515/jenkins-hbase4.apache.org%2C43031%2C1689898477515.meta.1689898478282.meta 2023-07-21 00:14:38,301 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42517,DS-7fe77ff2-e4ef-4258-9f28-45d404c1d843,DISK], DatanodeInfoWithStorage[127.0.0.1:38697,DS-a1e0c572-a52d-420e-bb8f-26fd12db3a5b,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-645ae859-8a7f-4034-aa18-66fc096b91fa,DISK]] 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 00:14:38,302 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 00:14:38,302 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 00:14:38,304 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 00:14:38,305 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/info 2023-07-21 00:14:38,305 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/info 2023-07-21 00:14:38,305 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:38,306 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:38,306 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:38,307 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:38,307 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:38,307 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:38,308 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:38,308 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:38,309 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/table 2023-07-21 00:14:38,309 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/table 2023-07-21 00:14:38,309 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:38,309 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:38,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740 2023-07-21 00:14:38,312 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740 2023-07-21 00:14:38,314 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-21 00:14:38,316 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:38,317 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10847736000, jitterRate=0.010274142026901245}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:38,317 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:38,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689898478271 2023-07-21 00:14:38,324 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 00:14:38,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 00:14:38,326 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43031,1689898477515, state=OPEN 2023-07-21 00:14:38,327 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 00:14:38,327 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:38,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 00:14:38,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43031,1689898477515 in 210 msec 2023-07-21 00:14:38,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 00:14:38,330 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-07-21 00:14:38,332 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 500 msec 2023-07-21 00:14:38,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689898478332, completionTime=-1 2023-07-21 00:14:38,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 00:14:38,332 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 00:14:38,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 00:14:38,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689898538337 2023-07-21 00:14:38,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689898598337 2023-07-21 00:14:38,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36667,1689898476899-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36667,1689898476899-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36667,1689898476899-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36667, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 00:14:38,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:38,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 00:14:38,346 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 00:14:38,347 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:38,348 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:38,350 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,350 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16 empty. 2023-07-21 00:14:38,351 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,351 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 00:14:38,364 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:38,365 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8be21d62fa1ab51031e13c905ca09e16, NAME => 'hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp 2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8be21d62fa1ab51031e13c905ca09e16, disabling compactions & flushes 2023-07-21 00:14:38,375 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 
2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. after waiting 0 ms 2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:38,375 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:38,375 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8be21d62fa1ab51031e13c905ca09e16: 2023-07-21 00:14:38,378 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:38,379 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898478378"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898478378"}]},"ts":"1689898478378"} 2023-07-21 00:14:38,381 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:38,382 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:38,382 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898478382"}]},"ts":"1689898478382"} 2023-07-21 00:14:38,383 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 00:14:38,386 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:38,387 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:38,387 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:38,387 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:38,387 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:38,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8be21d62fa1ab51031e13c905ca09e16, ASSIGN}] 2023-07-21 00:14:38,389 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8be21d62fa1ab51031e13c905ca09e16, ASSIGN 2023-07-21 00:14:38,390 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8be21d62fa1ab51031e13c905ca09e16, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43031,1689898477515; forceNewPlan=false, retain=false 2023-07-21 00:14:38,444 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:38,446 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 00:14:38,448 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:38,448 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:38,450 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,450 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c empty. 
2023-07-21 00:14:38,451 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,451 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 00:14:38,463 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:38,464 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6db26a1d63f39d73c9bd0b8b1bdcbe9c, NAME => 'hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp 2023-07-21 00:14:38,473 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,474 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 6db26a1d63f39d73c9bd0b8b1bdcbe9c, disabling compactions & flushes 2023-07-21 00:14:38,474 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:38,474 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:38,474 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. after waiting 0 ms 2023-07-21 00:14:38,474 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:38,474 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 
2023-07-21 00:14:38,474 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 6db26a1d63f39d73c9bd0b8b1bdcbe9c: 2023-07-21 00:14:38,476 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:38,477 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898478477"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898478477"}]},"ts":"1689898478477"} 2023-07-21 00:14:38,478 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:38,479 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:38,479 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898478479"}]},"ts":"1689898478479"} 2023-07-21 00:14:38,480 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 00:14:38,483 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:38,484 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:38,484 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:38,484 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:38,484 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:38,484 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6db26a1d63f39d73c9bd0b8b1bdcbe9c, ASSIGN}] 2023-07-21 00:14:38,487 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6db26a1d63f39d73c9bd0b8b1bdcbe9c, ASSIGN 2023-07-21 00:14:38,488 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6db26a1d63f39d73c9bd0b8b1bdcbe9c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37637,1689898477123; forceNewPlan=false, retain=false 2023-07-21 00:14:38,488 INFO [jenkins-hbase4:36667] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-21 00:14:38,489 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8be21d62fa1ab51031e13c905ca09e16, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,490 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898478489"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898478489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898478489"}]},"ts":"1689898478489"} 2023-07-21 00:14:38,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=6db26a1d63f39d73c9bd0b8b1bdcbe9c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898478490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898478490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898478490"}]},"ts":"1689898478490"} 2023-07-21 00:14:38,491 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 8be21d62fa1ab51031e13c905ca09e16, server=jenkins-hbase4.apache.org,43031,1689898477515}] 2023-07-21 00:14:38,491 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 6db26a1d63f39d73c9bd0b8b1bdcbe9c, server=jenkins-hbase4.apache.org,37637,1689898477123}] 2023-07-21 00:14:38,644 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,644 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:38,646 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59760, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:38,652 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 
2023-07-21 00:14:38,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8be21d62fa1ab51031e13c905ca09e16, NAME => 'hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:38,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,659 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6db26a1d63f39d73c9bd0b8b1bdcbe9c, NAME => 'hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:38,660 INFO [StoreOpener-8be21d62fa1ab51031e13c905ca09e16-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. service=MultiRowMutationService 2023-07-21 00:14:38,660 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,661 DEBUG [StoreOpener-8be21d62fa1ab51031e13c905ca09e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/info 2023-07-21 00:14:38,661 DEBUG [StoreOpener-8be21d62fa1ab51031e13c905ca09e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/info 2023-07-21 00:14:38,662 INFO [StoreOpener-6db26a1d63f39d73c9bd0b8b1bdcbe9c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,662 INFO [StoreOpener-8be21d62fa1ab51031e13c905ca09e16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8be21d62fa1ab51031e13c905ca09e16 columnFamilyName info 2023-07-21 00:14:38,663 INFO [StoreOpener-8be21d62fa1ab51031e13c905ca09e16-1] regionserver.HStore(310): Store=8be21d62fa1ab51031e13c905ca09e16/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:38,663 DEBUG [StoreOpener-6db26a1d63f39d73c9bd0b8b1bdcbe9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/m 2023-07-21 00:14:38,663 DEBUG [StoreOpener-6db26a1d63f39d73c9bd0b8b1bdcbe9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/m 2023-07-21 00:14:38,663 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,664 INFO [StoreOpener-6db26a1d63f39d73c9bd0b8b1bdcbe9c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6db26a1d63f39d73c9bd0b8b1bdcbe9c columnFamilyName m 2023-07-21 00:14:38,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,664 INFO [StoreOpener-6db26a1d63f39d73c9bd0b8b1bdcbe9c-1] regionserver.HStore(310): Store=6db26a1d63f39d73c9bd0b8b1bdcbe9c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:38,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,671 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:38,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:38,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:38,675 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8be21d62fa1ab51031e13c905ca09e16; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10768267360, jitterRate=0.002873048186302185}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:38,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8be21d62fa1ab51031e13c905ca09e16: 2023-07-21 00:14:38,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:38,677 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16., pid=8, masterSystemTime=1689898478643 2023-07-21 00:14:38,677 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6db26a1d63f39d73c9bd0b8b1bdcbe9c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4141342a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:38,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6db26a1d63f39d73c9bd0b8b1bdcbe9c: 2023-07-21 00:14:38,679 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c., pid=9, masterSystemTime=1689898478644 2023-07-21 00:14:38,681 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:38,681 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:38,682 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8be21d62fa1ab51031e13c905ca09e16, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:38,682 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898478682"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898478682"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898478682"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898478682"}]},"ts":"1689898478682"} 2023-07-21 00:14:38,682 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:38,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 
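Annotation (not part of the log): the entries above show each OpenRegionProcedure completing and the master writing regionState=OPEN plus the hosting region server back into hbase:meta. A minimal client-side sketch of reading those same assignments afterwards; the configuration lookup and table name are assumptions, not code from this test:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

// Print which region server currently hosts each region of hbase:namespace,
// i.e. the assignment the master just persisted to hbase:meta above.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
  for (HRegionLocation loc : locator.getAllRegionLocations()) {
    System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
  }
}
```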
2023-07-21 00:14:38,683 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=6db26a1d63f39d73c9bd0b8b1bdcbe9c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,683 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898478683"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898478683"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898478683"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898478683"}]},"ts":"1689898478683"} 2023-07-21 00:14:38,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-21 00:14:38,687 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 00:14:38,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 8be21d62fa1ab51031e13c905ca09e16, server=jenkins-hbase4.apache.org,43031,1689898477515 in 193 msec 2023-07-21 00:14:38,687 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 6db26a1d63f39d73c9bd0b8b1bdcbe9c, server=jenkins-hbase4.apache.org,37637,1689898477123 in 194 msec 2023-07-21 00:14:38,688 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 00:14:38,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 00:14:38,688 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8be21d62fa1ab51031e13c905ca09e16, ASSIGN in 300 msec 2023-07-21 00:14:38,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6db26a1d63f39d73c9bd0b8b1bdcbe9c, ASSIGN in 203 msec 2023-07-21 00:14:38,689 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:38,689 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:38,689 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898478689"}]},"ts":"1689898478689"} 2023-07-21 00:14:38,689 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898478689"}]},"ts":"1689898478689"} 2023-07-21 00:14:38,690 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 00:14:38,691 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 00:14:38,692 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:38,694 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 347 msec 2023-07-21 00:14:38,698 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:38,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 254 msec 2023-07-21 00:14:38,718 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 00:14:38,718 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-21 00:14:38,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 00:14:38,748 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:38,749 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:38,751 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:38,752 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:38,756 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 00:14:38,756 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
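Annotation (not part of the log): the RSGroupStartupWorker and "GroupBasedLoadBalancer is now online" entries above only appear because this mini-cluster runs with the RSGroup feature wired in. A hedged sketch of the two master-side settings that enable it in HBase 2.4 with the separate hbase-rsgroup module, shown on a Configuration object the way a test harness would set them; this is not an excerpt from this test's code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Master-side configuration enabling region server groups: the coprocessor
// endpoint serves RSGroupAdminService RPCs, and the group-aware balancer
// keeps regions inside their assigned group.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.coprocessor.master.classes",
    "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
conf.set("hbase.master.loadbalancer.class",
    "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
```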
2023-07-21 00:14:38,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 00:14:38,761 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:38,761 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:38,762 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:38,765 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:38,765 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36667,1689898476899] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 00:14:38,767 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-21 00:14:38,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 00:14:38,785 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:38,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-21 00:14:38,792 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 00:14:38,796 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 00:14:38,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.113sec 2023-07-21 00:14:38,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-21 00:14:38,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:38,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-21 00:14:38,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-21 00:14:38,798 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:38,799 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:38,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-21 00:14:38,800 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:38,801 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37 empty. 2023-07-21 00:14:38,801 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:38,801 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-21 00:14:38,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-21 00:14:38,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
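Annotation (not part of the log): the quota activity above ("Quota table not found. Creating...", the hbase:quota descriptor with its 'q' and 'u' families, and the QuotaObserverChore registrations) is only active because the cluster runs with hbase.quota.enabled=true. Once hbase:quota exists, quotas are managed through the Admin API; a minimal sketch with an illustrative user name and limit, not taken from this test:

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

// Store a per-user request throttle; the setting ends up as a row in the
// hbase:quota table whose creation is logged above.
try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
     Admin admin = conn.getAdmin()) {
  admin.setQuota(QuotaSettingsFactory.throttleUser(
      "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
}
```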
2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36667,1689898476899-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 00:14:38,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36667,1689898476899-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-21 00:14:38,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 00:14:38,813 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:38,815 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4eaf86570299228c2bfde0d0540f3b37, NAME => 'hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp 2023-07-21 00:14:38,823 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:38,824 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 4eaf86570299228c2bfde0d0540f3b37, disabling compactions & flushes 2023-07-21 00:14:38,824 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:38,824 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:38,824 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. after waiting 0 ms 2023-07-21 00:14:38,824 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:38,824 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 
2023-07-21 00:14:38,824 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 4eaf86570299228c2bfde0d0540f3b37: 2023-07-21 00:14:38,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:38,827 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689898478827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898478827"}]},"ts":"1689898478827"} 2023-07-21 00:14:38,828 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:38,829 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:38,829 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898478829"}]},"ts":"1689898478829"} 2023-07-21 00:14:38,830 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-21 00:14:38,832 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:38,832 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:38,832 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:38,832 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:38,833 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:38,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=4eaf86570299228c2bfde0d0540f3b37, ASSIGN}] 2023-07-21 00:14:38,833 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=4eaf86570299228c2bfde0d0540f3b37, ASSIGN 2023-07-21 00:14:38,834 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=4eaf86570299228c2bfde0d0540f3b37, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37637,1689898477123; forceNewPlan=false, retain=false 2023-07-21 00:14:38,877 DEBUG [Listener at localhost/38819] zookeeper.ReadOnlyZKClient(139): Connect 0x74b6aab4 to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:38,883 DEBUG [Listener at localhost/38819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d891beb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:38,884 DEBUG 
[hconnection-0x4a6eb50a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:38,886 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54680, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:38,887 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:38,888 INFO [Listener at localhost/38819] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:38,890 DEBUG [Listener at localhost/38819] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 00:14:38,891 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 00:14:38,895 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 00:14:38,895 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:38,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 00:14:38,896 DEBUG [Listener at localhost/38819] zookeeper.ReadOnlyZKClient(139): Connect 0x6caa8057 to 127.0.0.1:63294 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:38,900 DEBUG [Listener at localhost/38819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5353ae2d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:38,901 INFO [Listener at localhost/38819] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63294 2023-07-21 00:14:38,903 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:38,906 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101853ae84c000a connected 2023-07-21 00:14:38,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-21 00:14:38,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-21 00:14:38,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 00:14:38,920 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): 
master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:38,923 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-21 00:14:38,984 INFO [jenkins-hbase4:36667] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 00:14:38,986 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4eaf86570299228c2bfde0d0540f3b37, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:38,986 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689898478986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898478986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898478986"}]},"ts":"1689898478986"} 2023-07-21 00:14:38,989 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 4eaf86570299228c2bfde0d0540f3b37, server=jenkins-hbase4.apache.org,37637,1689898477123}] 2023-07-21 00:14:39,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-21 00:14:39,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:39,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-21 00:14:39,024 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:39,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-21 00:14:39,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:39,026 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:39,027 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:39,029 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:39,030 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,031 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(153): Directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f empty. 2023-07-21 00:14:39,031 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,032 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 00:14:39,047 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:39,048 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => c0975a5fe96a39af95f6a612c1bb182f, NAME => 'np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp 2023-07-21 00:14:39,057 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:39,058 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing c0975a5fe96a39af95f6a612c1bb182f, disabling compactions & flushes 2023-07-21 00:14:39,058 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,058 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,058 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. after waiting 0 ms 2023-07-21 00:14:39,058 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,058 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 
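Annotation (not part of the log): the np1 namespace requested a few entries earlier (creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'}) caps how many regions and tables the NamespaceAuditor will allow in it. A minimal sketch of the equivalent client call, assuming an open Admin handle named admin:

```java
import org.apache.hadoop.hbase.NamespaceDescriptor;

// Create a namespace whose table and region counts are enforced by the
// NamespaceAuditor started during master initialization above.
NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
    .addConfiguration("hbase.namespace.quota.maxregions", "5")
    .addConfiguration("hbase.namespace.quota.maxtables", "2")
    .build();
admin.createNamespace(np1);
```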
2023-07-21 00:14:39,058 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for c0975a5fe96a39af95f6a612c1bb182f: 2023-07-21 00:14:39,060 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:39,061 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898479061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898479061"}]},"ts":"1689898479061"} 2023-07-21 00:14:39,062 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:39,063 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:39,063 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898479063"}]},"ts":"1689898479063"} 2023-07-21 00:14:39,064 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-21 00:14:39,067 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:39,067 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:39,067 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:39,067 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:39,068 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:39,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, ASSIGN}] 2023-07-21 00:14:39,069 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, ASSIGN 2023-07-21 00:14:39,069 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38311,1689898477336; forceNewPlan=false, retain=false 2023-07-21 00:14:39,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:39,145 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 
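Annotation (not part of the log): the "create 'np1:table1'" request logged above uses a single column family with default settings. A minimal sketch of the same request through the 2.4 client API, again assuming an Admin handle named admin:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// One family 'fam1', everything else left at defaults, matching the
// descriptor printed in the create request above.
admin.createTable(TableDescriptorBuilder
    .newBuilder(TableName.valueOf("np1", "table1"))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
    .build());
```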
2023-07-21 00:14:39,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4eaf86570299228c2bfde0d0540f3b37, NAME => 'hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:39,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:39,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,146 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,148 DEBUG [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/q 2023-07-21 00:14:39,148 DEBUG [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/q 2023-07-21 00:14:39,148 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4eaf86570299228c2bfde0d0540f3b37 columnFamilyName q 2023-07-21 00:14:39,149 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] regionserver.HStore(310): Store=4eaf86570299228c2bfde0d0540f3b37/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:39,149 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,150 DEBUG 
[StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/u 2023-07-21 00:14:39,150 DEBUG [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/u 2023-07-21 00:14:39,151 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4eaf86570299228c2bfde0d0540f3b37 columnFamilyName u 2023-07-21 00:14:39,151 INFO [StoreOpener-4eaf86570299228c2bfde0d0540f3b37-1] regionserver.HStore(310): Store=4eaf86570299228c2bfde0d0540f3b37/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:39,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,154 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
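Annotation (not part of the log): the FlushLargeStoresPolicy entry above notes that hbase:quota does not set hbase.hregion.percolumnfamilyflush.size.lower.bound, so the per-family flush threshold falls back to the region flush size divided by the number of families. If a table should use an explicit bound, it can be pinned in the table descriptor; a hedged sketch with an illustrative table name and value:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Set the per-column-family flush lower bound explicitly instead of relying
// on the fallback computed in the log entry above.
TableDescriptor td = TableDescriptorBuilder
    .newBuilder(TableName.valueOf("example"))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
    .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
        String.valueOf(16L * 1024 * 1024)) // 16 MB, illustrative
    .build();
```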
2023-07-21 00:14:39,155 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:39,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:39,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4eaf86570299228c2bfde0d0540f3b37; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11354676960, jitterRate=0.057486698031425476}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-21 00:14:39,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4eaf86570299228c2bfde0d0540f3b37: 2023-07-21 00:14:39,158 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37., pid=15, masterSystemTime=1689898479141 2023-07-21 00:14:39,160 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:39,160 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:39,160 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4eaf86570299228c2bfde0d0540f3b37, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:39,160 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689898479160"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898479160"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898479160"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898479160"}]},"ts":"1689898479160"} 2023-07-21 00:14:39,163 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-21 00:14:39,163 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 4eaf86570299228c2bfde0d0540f3b37, server=jenkins-hbase4.apache.org,37637,1689898477123 in 174 msec 2023-07-21 00:14:39,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 00:14:39,164 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=4eaf86570299228c2bfde0d0540f3b37, ASSIGN in 330 msec 2023-07-21 00:14:39,165 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:39,165 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898479165"}]},"ts":"1689898479165"} 2023-07-21 00:14:39,166 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-21 00:14:39,168 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:39,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 372 msec 2023-07-21 00:14:39,220 INFO [jenkins-hbase4:36667] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 00:14:39,221 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c0975a5fe96a39af95f6a612c1bb182f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:39,221 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898479221"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898479221"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898479221"}]},"ts":"1689898479221"} 2023-07-21 00:14:39,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure c0975a5fe96a39af95f6a612c1bb182f, server=jenkins-hbase4.apache.org,38311,1689898477336}] 2023-07-21 00:14:39,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:39,375 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:39,376 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:39,377 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51212, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:39,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 
2023-07-21 00:14:39,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c0975a5fe96a39af95f6a612c1bb182f, NAME => 'np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:39,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:39,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,384 INFO [StoreOpener-c0975a5fe96a39af95f6a612c1bb182f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,385 DEBUG [StoreOpener-c0975a5fe96a39af95f6a612c1bb182f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/fam1 2023-07-21 00:14:39,386 DEBUG [StoreOpener-c0975a5fe96a39af95f6a612c1bb182f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/fam1 2023-07-21 00:14:39,386 INFO [StoreOpener-c0975a5fe96a39af95f6a612c1bb182f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c0975a5fe96a39af95f6a612c1bb182f columnFamilyName fam1 2023-07-21 00:14:39,386 INFO [StoreOpener-c0975a5fe96a39af95f6a612c1bb182f-1] regionserver.HStore(310): Store=c0975a5fe96a39af95f6a612c1bb182f/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:39,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,393 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:39,394 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c0975a5fe96a39af95f6a612c1bb182f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10773147520, jitterRate=0.0033275485038757324}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:39,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c0975a5fe96a39af95f6a612c1bb182f: 2023-07-21 00:14:39,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f., pid=18, masterSystemTime=1689898479375 2023-07-21 00:14:39,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,399 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c0975a5fe96a39af95f6a612c1bb182f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:39,399 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898479399"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898479399"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898479399"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898479399"}]},"ts":"1689898479399"} 2023-07-21 00:14:39,401 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 00:14:39,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure c0975a5fe96a39af95f6a612c1bb182f, server=jenkins-hbase4.apache.org,38311,1689898477336 in 177 msec 2023-07-21 00:14:39,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 00:14:39,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, ASSIGN in 334 msec 2023-07-21 00:14:39,404 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:39,404 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898479404"}]},"ts":"1689898479404"} 2023-07-21 00:14:39,405 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-21 00:14:39,407 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:39,408 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 386 msec 2023-07-21 00:14:39,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:39,629 INFO [Listener at localhost/38819] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-21 00:14:39,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:39,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-21 00:14:39,634 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:39,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-21 00:14:39,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 00:14:39,653 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=21 msec 2023-07-21 00:14:39,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 00:14:39,739 INFO [Listener at localhost/38819] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
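Annotation (not part of the log): the rollback above is the np1 namespace quota doing its job. np1:table1 already uses one region, so a second table that would bring the namespace to six regions exceeds hbase.namespace.quota.maxregions=5 and the CreateTableProcedure is rolled back with QuotaExceededException. A sketch of the client side, assuming an Admin handle named admin; the split points are illustrative, since the actual split keys are not shown in this log:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Four split keys => five regions for np1:table2; together with the one
// region already used by np1:table1 that exceeds the namespace cap of five.
byte[][] splits = { Bytes.toBytes("b"), Bytes.toBytes("c"),
    Bytes.toBytes("d"), Bytes.toBytes("e") };
try {
  admin.createTable(TableDescriptorBuilder
      .newBuilder(TableName.valueOf("np1", "table2"))
      .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
      .build(), splits);
} catch (IOException e) {
  // The remote cause is org.apache.hadoop.hbase.quotas.QuotaExceededException,
  // matching the "Rolled back pid=19" entry above.
}
```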
2023-07-21 00:14:39,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:39,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:39,741 INFO [Listener at localhost/38819] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-21 00:14:39,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-21 00:14:39,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-21 00:14:39,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 00:14:39,744 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898479744"}]},"ts":"1689898479744"} 2023-07-21 00:14:39,745 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-21 00:14:39,747 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-21 00:14:39,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, UNASSIGN}] 2023-07-21 00:14:39,748 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, UNASSIGN 2023-07-21 00:14:39,749 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c0975a5fe96a39af95f6a612c1bb182f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:39,749 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898479749"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898479749"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898479749"}]},"ts":"1689898479749"} 2023-07-21 00:14:39,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure c0975a5fe96a39af95f6a612c1bb182f, server=jenkins-hbase4.apache.org,38311,1689898477336}] 2023-07-21 00:14:39,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 00:14:39,853 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 00:14:39,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1604): Closing c0975a5fe96a39af95f6a612c1bb182f, disabling compactions & flushes 2023-07-21 00:14:39,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. after waiting 0 ms 2023-07-21 00:14:39,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:39,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f. 2023-07-21 00:14:39,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c0975a5fe96a39af95f6a612c1bb182f: 2023-07-21 00:14:39,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:39,915 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c0975a5fe96a39af95f6a612c1bb182f, regionState=CLOSED 2023-07-21 00:14:39,915 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898479915"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898479915"}]},"ts":"1689898479915"} 2023-07-21 00:14:39,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-21 00:14:39,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure c0975a5fe96a39af95f6a612c1bb182f, server=jenkins-hbase4.apache.org,38311,1689898477336 in 166 msec 2023-07-21 00:14:39,920 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-21 00:14:39,920 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c0975a5fe96a39af95f6a612c1bb182f, UNASSIGN in 171 msec 2023-07-21 00:14:39,921 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898479921"}]},"ts":"1689898479921"} 2023-07-21 00:14:39,922 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-21 00:14:39,925 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-21 00:14:39,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure 
table=np1:table1 in 185 msec 2023-07-21 00:14:40,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 00:14:40,047 INFO [Listener at localhost/38819] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-21 00:14:40,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-21 00:14:40,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,050 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-21 00:14:40,051 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:40,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:40,055 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:40,056 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/fam1, FileablePath, hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/recovered.edits] 2023-07-21 00:14:40,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 00:14:40,063 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/recovered.edits/4.seqid to hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/archive/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f/recovered.edits/4.seqid 2023-07-21 00:14:40,064 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/.tmp/data/np1/table1/c0975a5fe96a39af95f6a612c1bb182f 2023-07-21 00:14:40,064 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-21 00:14:40,073 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,075 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of 
np1:table1 from hbase:meta 2023-07-21 00:14:40,077 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-21 00:14:40,079 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,079 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-21 00:14:40,079 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898480079"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:40,080 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 00:14:40,080 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c0975a5fe96a39af95f6a612c1bb182f, NAME => 'np1:table1,,1689898479021.c0975a5fe96a39af95f6a612c1bb182f.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 00:14:40,081 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-21 00:14:40,081 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898480081"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:40,082 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-21 00:14:40,085 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-21 00:14:40,087 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 38 msec 2023-07-21 00:14:40,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-21 00:14:40,158 INFO [Listener at localhost/38819] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-21 00:14:40,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-21 00:14:40,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,172 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,175 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,177 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 00:14:40,180 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-21 00:14:40,180 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:40,181 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,183 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-21 00:14:40,184 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-21 00:14:40,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36667] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-21 00:14:40,279 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 00:14:40,279 INFO [Listener at localhost/38819] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x74b6aab4 to 127.0.0.1:63294 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] util.JVMClusterUtil(257): Found active master hash=1166598135, stopped=false 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 00:14:40,280 DEBUG [Listener at localhost/38819] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-21 00:14:40,280 INFO [Listener at localhost/38819] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:40,282 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:40,282 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:40,282 INFO [Listener at localhost/38819] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 00:14:40,282 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:40,282 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:40,282 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:40,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:40,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:40,284 DEBUG [Listener at localhost/38819] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2a1ba7a2 to 127.0.0.1:63294 2023-07-21 00:14:40,284 DEBUG [Listener at localhost/38819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,284 INFO [Listener at localhost/38819] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37637,1689898477123' ***** 2023-07-21 00:14:40,284 INFO [Listener at localhost/38819] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:40,284 INFO [Listener at localhost/38819] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38311,1689898477336' ***** 2023-07-21 00:14:40,284 INFO [Listener at localhost/38819] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:40,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:40,284 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:40,284 INFO [Listener at localhost/38819] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43031,1689898477515' ***** 2023-07-21 00:14:40,285 INFO [Listener at localhost/38819] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:40,284 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:40,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:40,285 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:40,295 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:40,293 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,291 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,291 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:40,291 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:40,297 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,303 INFO [RS:1;jenkins-hbase4:38311] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@3f6bc563{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:40,303 INFO [RS:0;jenkins-hbase4:37637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@172af1a9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:40,304 INFO [RS:1;jenkins-hbase4:38311] server.AbstractConnector(383): Stopped ServerConnector@40e93b12{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:40,304 INFO [RS:0;jenkins-hbase4:37637] server.AbstractConnector(383): Stopped ServerConnector@39caab19{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:40,305 INFO [RS:1;jenkins-hbase4:38311] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:40,305 INFO [RS:0;jenkins-hbase4:37637] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:40,305 INFO [RS:1;jenkins-hbase4:38311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@75145d60{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:40,308 INFO [RS:0;jenkins-hbase4:37637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76bebe84{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:40,308 INFO [RS:1;jenkins-hbase4:38311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@20989925{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:40,308 INFO [RS:0;jenkins-hbase4:37637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3dd08807{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:40,308 INFO [RS:2;jenkins-hbase4:43031] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@156ae7ad{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:40,309 INFO [RS:1;jenkins-hbase4:38311] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:40,309 INFO [RS:1;jenkins-hbase4:38311] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:40,309 INFO [RS:1;jenkins-hbase4:38311] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
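The disable (pid=20), delete-table (pid=23) and delete-namespace (pid=24) procedures earlier in this stretch follow the required cleanup order: a table must be disabled before it can be deleted, and a namespace can only be dropped once it no longer contains tables. A short sketch of the same sequence from the client side, assuming an already-open org.apache.hadoop.hbase.client.Admin:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class NamespaceCleanupSketch {
      // Mirrors the procedure order in the log: DISABLE -> DELETE table -> delete namespace.
      static void dropNamespaceWithTables(Admin admin) throws Exception {
        TableName t1 = TableName.valueOf("np1:table1");
        if (admin.tableExists(t1)) {
          if (admin.isTableEnabled(t1)) {
            admin.disableTable(t1);    // DisableTableProcedure (pid=20)
          }
          admin.deleteTable(t1);       // DeleteTableProcedure (pid=23)
        }
        admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24)
      }
    }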
2023-07-21 00:14:40,309 INFO [RS:2;jenkins-hbase4:43031] server.AbstractConnector(383): Stopped ServerConnector@5fd1460f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:40,309 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:40,309 INFO [RS:0;jenkins-hbase4:37637] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:40,309 DEBUG [RS:1;jenkins-hbase4:38311] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42a5f925 to 127.0.0.1:63294 2023-07-21 00:14:40,309 INFO [RS:2;jenkins-hbase4:43031] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:40,309 DEBUG [RS:1;jenkins-hbase4:38311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,309 INFO [RS:2;jenkins-hbase4:43031] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@329690c5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:40,310 INFO [RS:2;jenkins-hbase4:43031] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@24cfa0e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:40,309 INFO [RS:0;jenkins-hbase4:37637] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:40,309 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38311,1689898477336; all regions closed. 2023-07-21 00:14:40,310 INFO [RS:0;jenkins-hbase4:37637] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:40,310 DEBUG [RS:1;jenkins-hbase4:38311] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 00:14:40,310 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(3305): Received CLOSE for 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:40,311 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(3305): Received CLOSE for 4eaf86570299228c2bfde0d0540f3b37 2023-07-21 00:14:40,311 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:40,311 DEBUG [RS:0;jenkins-hbase4:37637] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x246c343d to 127.0.0.1:63294 2023-07-21 00:14:40,311 DEBUG [RS:0;jenkins-hbase4:37637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6db26a1d63f39d73c9bd0b8b1bdcbe9c, disabling compactions & flushes 2023-07-21 00:14:40,312 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 00:14:40,311 INFO [RS:2;jenkins-hbase4:43031] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:40,312 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1478): Online Regions={6db26a1d63f39d73c9bd0b8b1bdcbe9c=hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c., 4eaf86570299228c2bfde0d0540f3b37=hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37.} 2023-07-21 00:14:40,312 INFO [RS:2;jenkins-hbase4:43031] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:40,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:40,312 DEBUG [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1504): Waiting on 4eaf86570299228c2bfde0d0540f3b37, 6db26a1d63f39d73c9bd0b8b1bdcbe9c 2023-07-21 00:14:40,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:40,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. after waiting 0 ms 2023-07-21 00:14:40,312 INFO [RS:2;jenkins-hbase4:43031] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:40,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 
2023-07-21 00:14:40,312 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(3305): Received CLOSE for 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:40,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6db26a1d63f39d73c9bd0b8b1bdcbe9c 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-21 00:14:40,312 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:40,312 DEBUG [RS:2;jenkins-hbase4:43031] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x72b98561 to 127.0.0.1:63294 2023-07-21 00:14:40,313 DEBUG [RS:2;jenkins-hbase4:43031] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,313 INFO [RS:2;jenkins-hbase4:43031] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:40,313 INFO [RS:2;jenkins-hbase4:43031] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:40,313 INFO [RS:2;jenkins-hbase4:43031] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:40,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8be21d62fa1ab51031e13c905ca09e16, disabling compactions & flushes 2023-07-21 00:14:40,314 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 00:14:40,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:40,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:40,314 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 00:14:40,316 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1478): Online Regions={8be21d62fa1ab51031e13c905ca09e16=hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16., 1588230740=hbase:meta,,1.1588230740} 2023-07-21 00:14:40,316 DEBUG [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1504): Waiting on 1588230740, 8be21d62fa1ab51031e13c905ca09e16 2023-07-21 00:14:40,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. after waiting 0 ms 2023-07-21 00:14:40,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 
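The "Online Regions={...}" and "Waiting on N regions to close" lines above show each region server enumerating what it still hosts while stopping: hbase:rsgroup and hbase:quota on one server, hbase:namespace plus hbase:meta (1588230740) on another. A client can get the same per-server view; here is a sketch against the HBase 2.x Admin/ClusterMetrics API, with connection handling assumed:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public final class OnlineRegionsSketch {
      // Print the regions hosted by every live region server, roughly what each
      // server logs as "Online Regions={...}" during shutdown.
      static void dumpOnlineRegions(Admin admin) throws Exception {
        for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
          System.out.println(sn);
          for (RegionInfo region : admin.getRegions(sn)) {
            System.out.println("  " + region.getRegionNameAsString());
          }
        }
      }
    }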
2023-07-21 00:14:40,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8be21d62fa1ab51031e13c905ca09e16 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-21 00:14:40,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:40,318 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:40,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:40,318 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:40,319 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:40,320 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-21 00:14:40,325 DEBUG [RS:1;jenkins-hbase4:38311] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs 2023-07-21 00:14:40,325 INFO [RS:1;jenkins-hbase4:38311] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38311%2C1689898477336:(num 1689898478132) 2023-07-21 00:14:40,325 DEBUG [RS:1;jenkins-hbase4:38311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,325 INFO [RS:1;jenkins-hbase4:38311] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,326 INFO [RS:1;jenkins-hbase4:38311] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:40,326 INFO [RS:1;jenkins-hbase4:38311] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:40,326 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:40,326 INFO [RS:1;jenkins-hbase4:38311] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:40,326 INFO [RS:1;jenkins-hbase4:38311] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
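Before a region is closed above, its memstore is flushed to an HFile (the 585 B flush for hbase:rsgroup, 215 B for hbase:namespace, and the 3-family flush for hbase:meta), so no edits remain only in the WAL when the seqid marker is written. The same flush can also be requested explicitly; a minimal sketch, with the table name chosen only for illustration:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class FlushSketch {
      // Explicitly flush a table's memstores to HFiles; the region close path in the
      // log does the equivalent automatically before the region is marked closed.
      static void flushTable(Admin admin) throws Exception {
        admin.flush(TableName.valueOf("hbase:rsgroup"));
      }
    }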
2023-07-21 00:14:40,331 INFO [RS:1;jenkins-hbase4:38311] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38311 2023-07-21 00:14:40,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/.tmp/m/e126f84d0f274c4c89ef8d1cbcfef08c 2023-07-21 00:14:40,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/.tmp/info/1a15873296d04e9eaf627c1b4c5ad956 2023-07-21 00:14:40,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/info/2b06b35d7805469ea4006fb900099535 2023-07-21 00:14:40,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1a15873296d04e9eaf627c1b4c5ad956 2023-07-21 00:14:40,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/.tmp/info/1a15873296d04e9eaf627c1b4c5ad956 as hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/info/1a15873296d04e9eaf627c1b4c5ad956 2023-07-21 00:14:40,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/.tmp/m/e126f84d0f274c4c89ef8d1cbcfef08c as hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/m/e126f84d0f274c4c89ef8d1cbcfef08c 2023-07-21 00:14:40,359 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2b06b35d7805469ea4006fb900099535 2023-07-21 00:14:40,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1a15873296d04e9eaf627c1b4c5ad956 2023-07-21 00:14:40,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/info/1a15873296d04e9eaf627c1b4c5ad956, entries=3, sequenceid=8, filesize=5.0 K 2023-07-21 00:14:40,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 8be21d62fa1ab51031e13c905ca09e16 in 51ms, sequenceid=8, compaction requested=false 2023-07-21 00:14:40,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 
'hbase:namespace' 2023-07-21 00:14:40,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/m/e126f84d0f274c4c89ef8d1cbcfef08c, entries=1, sequenceid=7, filesize=4.9 K 2023-07-21 00:14:40,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/namespace/8be21d62fa1ab51031e13c905ca09e16/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-21 00:14:40,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 6db26a1d63f39d73c9bd0b8b1bdcbe9c in 66ms, sequenceid=7, compaction requested=false 2023-07-21 00:14:40,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-21 00:14:40,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:40,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8be21d62fa1ab51031e13c905ca09e16: 2023-07-21 00:14:40,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689898478344.8be21d62fa1ab51031e13c905ca09e16. 2023-07-21 00:14:40,385 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/rep_barrier/018fade8efee4ac1915c2aca09fcde05 2023-07-21 00:14:40,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/rsgroup/6db26a1d63f39d73c9bd0b8b1bdcbe9c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-21 00:14:40,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:40,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:40,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6db26a1d63f39d73c9bd0b8b1bdcbe9c: 2023-07-21 00:14:40,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689898478444.6db26a1d63f39d73c9bd0b8b1bdcbe9c. 2023-07-21 00:14:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4eaf86570299228c2bfde0d0540f3b37, disabling compactions & flushes 2023-07-21 00:14:40,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 
2023-07-21 00:14:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. after waiting 0 ms 2023-07-21 00:14:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:40,393 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 018fade8efee4ac1915c2aca09fcde05 2023-07-21 00:14:40,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/quota/4eaf86570299228c2bfde0d0540f3b37/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:40,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:40,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4eaf86570299228c2bfde0d0540f3b37: 2023-07-21 00:14:40,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689898478796.4eaf86570299228c2bfde0d0540f3b37. 2023-07-21 00:14:40,411 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/table/481b23ca94bf4184a5e4ba06b185ac4d 2023-07-21 00:14:40,418 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:40,418 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,418 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:40,418 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,418 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,418 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) 
metadata for 481b23ca94bf4184a5e4ba06b185ac4d 2023-07-21 00:14:40,419 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38311,1689898477336 2023-07-21 00:14:40,419 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,420 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/info/2b06b35d7805469ea4006fb900099535 as hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/info/2b06b35d7805469ea4006fb900099535 2023-07-21 00:14:40,420 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38311,1689898477336] 2023-07-21 00:14:40,420 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38311,1689898477336; numProcessing=1 2023-07-21 00:14:40,422 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38311,1689898477336 already deleted, retry=false 2023-07-21 00:14:40,422 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38311,1689898477336 expired; onlineServers=2 2023-07-21 00:14:40,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2b06b35d7805469ea4006fb900099535 2023-07-21 00:14:40,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/info/2b06b35d7805469ea4006fb900099535, entries=32, sequenceid=31, filesize=8.5 K 2023-07-21 00:14:40,428 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/rep_barrier/018fade8efee4ac1915c2aca09fcde05 as hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/rep_barrier/018fade8efee4ac1915c2aca09fcde05 2023-07-21 00:14:40,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 018fade8efee4ac1915c2aca09fcde05 2023-07-21 00:14:40,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/rep_barrier/018fade8efee4ac1915c2aca09fcde05, entries=1, sequenceid=31, filesize=4.9 K 2023-07-21 00:14:40,436 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/.tmp/table/481b23ca94bf4184a5e4ba06b185ac4d as 
hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/table/481b23ca94bf4184a5e4ba06b185ac4d 2023-07-21 00:14:40,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 481b23ca94bf4184a5e4ba06b185ac4d 2023-07-21 00:14:40,443 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/table/481b23ca94bf4184a5e4ba06b185ac4d, entries=8, sequenceid=31, filesize=5.2 K 2023-07-21 00:14:40,445 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 126ms, sequenceid=31, compaction requested=false 2023-07-21 00:14:40,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 00:14:40,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-21 00:14:40,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:40,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:40,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:40,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:40,512 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37637,1689898477123; all regions closed. 2023-07-21 00:14:40,512 DEBUG [RS:0;jenkins-hbase4:37637] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-21 00:14:40,516 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43031,1689898477515; all regions closed. 2023-07-21 00:14:40,517 DEBUG [RS:2;jenkins-hbase4:43031] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-21 00:14:40,521 DEBUG [RS:0;jenkins-hbase4:37637] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs 2023-07-21 00:14:40,521 INFO [RS:0;jenkins-hbase4:37637] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37637%2C1689898477123:(num 1689898478137) 2023-07-21 00:14:40,521 DEBUG [RS:0;jenkins-hbase4:37637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,522 INFO [RS:0;jenkins-hbase4:37637] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,522 INFO [RS:0;jenkins-hbase4:37637] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:40,522 INFO [RS:0;jenkins-hbase4:37637] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:40,522 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:40,522 INFO [RS:0;jenkins-hbase4:37637] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:40,522 INFO [RS:0;jenkins-hbase4:37637] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:40,523 INFO [RS:0;jenkins-hbase4:37637] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37637 2023-07-21 00:14:40,526 DEBUG [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs 2023-07-21 00:14:40,526 INFO [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43031%2C1689898477515.meta:.meta(num 1689898478282) 2023-07-21 00:14:40,527 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:40,527 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37637,1689898477123 2023-07-21 00:14:40,527 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,528 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37637,1689898477123] 2023-07-21 00:14:40,528 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37637,1689898477123; numProcessing=2 2023-07-21 00:14:40,529 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37637,1689898477123 already deleted, retry=false 2023-07-21 00:14:40,529 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37637,1689898477123 expired; onlineServers=1 2023-07-21 00:14:40,535 DEBUG [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/oldWALs 2023-07-21 00:14:40,535 INFO [RS:2;jenkins-hbase4:43031] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43031%2C1689898477515:(num 1689898478137) 2023-07-21 00:14:40,535 DEBUG [RS:2;jenkins-hbase4:43031] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,535 INFO [RS:2;jenkins-hbase4:43031] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:40,535 INFO [RS:2;jenkins-hbase4:43031] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:40,535 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:40,536 INFO [RS:2;jenkins-hbase4:43031] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43031 2023-07-21 00:14:40,542 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43031,1689898477515 2023-07-21 00:14:40,542 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:40,543 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43031,1689898477515] 2023-07-21 00:14:40,543 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43031,1689898477515; numProcessing=3 2023-07-21 00:14:40,544 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43031,1689898477515 already deleted, retry=false 2023-07-21 00:14:40,544 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43031,1689898477515 expired; onlineServers=0 2023-07-21 00:14:40,544 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36667,1689898476899' ***** 2023-07-21 00:14:40,544 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 00:14:40,545 DEBUG [M:0;jenkins-hbase4:36667] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41b7539e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:40,545 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:40,547 INFO [M:0;jenkins-hbase4:36667] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@28d2768{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:40,547 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:40,547 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:40,547 INFO [M:0;jenkins-hbase4:36667] server.AbstractConnector(383): Stopped ServerConnector@ed4524b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:40,548 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@551ae954{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1890f436{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36667,1689898476899 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36667,1689898476899; all regions closed. 2023-07-21 00:14:40,548 DEBUG [M:0;jenkins-hbase4:36667] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:40,548 INFO [M:0;jenkins-hbase4:36667] master.HMaster(1491): Stopping master jetty server 2023-07-21 00:14:40,549 INFO [M:0;jenkins-hbase4:36667] server.AbstractConnector(383): Stopped ServerConnector@6afc80d0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:40,549 DEBUG [M:0;jenkins-hbase4:36667] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 00:14:40,550 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 00:14:40,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898477857] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898477857,5,FailOnTimeoutGroup] 2023-07-21 00:14:40,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898477853] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898477853,5,FailOnTimeoutGroup] 2023-07-21 00:14:40,550 DEBUG [M:0;jenkins-hbase4:36667] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 00:14:40,551 INFO [M:0;jenkins-hbase4:36667] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 00:14:40,551 INFO [M:0;jenkins-hbase4:36667] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
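The entries above show each server cancelling its periodic chores on the way down: the regionservers report their CompactionThroughputTuner and CompactedHFilesCleaner chores, and the master cancels its LogCleaner and HFileCleaner threads. A minimal sketch of how such a chore is registered and torn down, assuming the HBase 2.x ChoreService/ScheduledChore API (the chore name and period below are illustrative, not taken from this test):

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        // Simple Stoppable the chore can consult, standing in for the server instance.
        final boolean[] stopped = {false};
        Stoppable stopper = new Stoppable() {
          @Override public void stop(String why) { stopped[0] = true; }
          @Override public boolean isStopped() { return stopped[0]; }
        };

        // One worker pool for all chores of a server, like "regionserver/host:0" in the log.
        ChoreService choreService = new ChoreService("example-server");

        // A periodic task; runs every second until the service is shut down.
        ScheduledChore chore = new ScheduledChore("ExampleCleaner", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("chore tick");
          }
        };
        choreService.scheduleChore(chore);

        Thread.sleep(3000);
        // Shutting the service down cancels the outstanding chores, which is when
        // a "Chore service for: ... had [ScheduledChore ...] on shutdown" line is logged.
        choreService.shutdown();
      }
    }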
2023-07-21 00:14:40,552 INFO [M:0;jenkins-hbase4:36667] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:40,552 DEBUG [M:0;jenkins-hbase4:36667] master.HMaster(1512): Stopping service threads 2023-07-21 00:14:40,552 INFO [M:0;jenkins-hbase4:36667] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 00:14:40,552 ERROR [M:0;jenkins-hbase4:36667] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 00:14:40,553 INFO [M:0;jenkins-hbase4:36667] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 00:14:40,553 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 00:14:40,553 DEBUG [M:0;jenkins-hbase4:36667] zookeeper.ZKUtil(398): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 00:14:40,553 WARN [M:0;jenkins-hbase4:36667] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 00:14:40,553 INFO [M:0;jenkins-hbase4:36667] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 00:14:40,554 INFO [M:0;jenkins-hbase4:36667] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 00:14:40,554 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:40,554 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:40,554 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:40,554 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:40,554 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
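The "Failed get of master address ... znode data == null" warning above is the master re-reading /hbase/master after its own ephemeral znode has already been deleted. The same check can be reproduced with a plain Apache ZooKeeper client; in this sketch the quorum 127.0.0.1:63294 is taken from this log and the znode path assumes the default zookeeper.znode.parent of /hbase:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class MasterZNodeCheck {
      public static void main(String[] args) throws Exception {
        // Connect to the quorum the test cluster used; no-op watcher.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63294", 30000, new Watcher() {
          @Override public void process(WatchedEvent event) { }
        });
        try {
          Stat stat = zk.exists("/hbase/master", false);
          if (stat == null) {
            // Mirrors the situation in the log: the active master's ephemeral
            // znode is gone, so there is no address left to read.
            System.out.println("/hbase/master does not exist (no active master)");
          } else {
            byte[] data = zk.getData("/hbase/master", false, stat);
            System.out.println("/hbase/master holds " + data.length + " bytes");
          }
        } finally {
          zk.close();
        }
      }
    }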
2023-07-21 00:14:40,554 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.15 KB 2023-07-21 00:14:40,568 INFO [M:0;jenkins-hbase4:36667] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c7a2b135604741e1930f08c58590f39f 2023-07-21 00:14:40,574 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c7a2b135604741e1930f08c58590f39f as hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c7a2b135604741e1930f08c58590f39f 2023-07-21 00:14:40,579 INFO [M:0;jenkins-hbase4:36667] regionserver.HStore(1080): Added hdfs://localhost:42959/user/jenkins/test-data/806881d9-6038-fa48-0fcb-33a4156dfd6c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c7a2b135604741e1930f08c58590f39f, entries=24, sequenceid=194, filesize=12.4 K 2023-07-21 00:14:40,579 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95226, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=194, compaction requested=false 2023-07-21 00:14:40,581 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:40,581 DEBUG [M:0;jenkins-hbase4:36667] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:40,585 INFO [M:0;jenkins-hbase4:36667] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 00:14:40,585 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:40,586 INFO [M:0;jenkins-hbase4:36667] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36667 2023-07-21 00:14:40,594 DEBUG [M:0;jenkins-hbase4:36667] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36667,1689898476899 already deleted, retry=false 2023-07-21 00:14:40,783 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,783 INFO [M:0;jenkins-hbase4:36667] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36667,1689898476899; zookeeper connection closed. 2023-07-21 00:14:40,783 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): master:36667-0x101853ae84c0000, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,883 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,883 INFO [RS:2;jenkins-hbase4:43031] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43031,1689898477515; zookeeper connection closed. 
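The flush recorded above (a ~92.99 KB memstore written to a new HFile under .tmp and then committed into the proc family at sequenceid=194) is the ordinary HRegion flush path, here applied to the master's internal master:store region as it closes. For a regular user table the same mechanism can be triggered from client code via Admin#flush; a minimal sketch, assuming the HBase 2.x client API and a hypothetical table name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          // Forces every region of the table to write its memstore out as HFiles,
          // the same flush-to-.tmp then commit sequence visible in the entries above.
          admin.flush(TableName.valueOf("example_table"));
        }
      }
    }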
2023-07-21 00:14:40,883 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:43031-0x101853ae84c0003, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,885 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d183f45] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d183f45 2023-07-21 00:14:40,983 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,983 INFO [RS:0;jenkins-hbase4:37637] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37637,1689898477123; zookeeper connection closed. 2023-07-21 00:14:40,983 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:37637-0x101853ae84c0001, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:40,984 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1dca598e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1dca598e 2023-07-21 00:14:41,084 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:41,084 INFO [RS:1;jenkins-hbase4:38311] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38311,1689898477336; zookeeper connection closed. 2023-07-21 00:14:41,084 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): regionserver:38311-0x101853ae84c0002, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:41,084 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4b66c7bc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4b66c7bc 2023-07-21 00:14:41,084 INFO [Listener at localhost/38819] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 00:14:41,085 WARN [Listener at localhost/38819] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:41,088 INFO [Listener at localhost/38819] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:41,194 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:41,194 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-623951809-172.31.14.131-1689898475731 (Datanode Uuid e6e9547b-0cfa-448f-8416-1a8371a86d57) service to localhost/127.0.0.1:42959 2023-07-21 00:14:41,195 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data5/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
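At this point JVMClusterUtil reports the HBase side of the cluster down (1 master and 3 regionservers), and the HDFS datanodes and the mini ZooKeeper cluster are stopped next. In a test this whole teardown is normally driven by a single call; a minimal sketch of the usual JUnit shape, assuming the HBaseTestingUtility API used throughout this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Brings up MiniDFS, MiniZK, one master and the requested regionservers.
        TEST_UTIL.startMiniCluster(3);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Drives the shutdown sequence seen around this point in the log:
        // regionservers, master, datanodes, then the MiniZK cluster.
        TEST_UTIL.shutdownMiniCluster();
      }
    }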
2023-07-21 00:14:41,195 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data6/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:41,197 WARN [Listener at localhost/38819] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:41,201 INFO [Listener at localhost/38819] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:41,305 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:41,305 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-623951809-172.31.14.131-1689898475731 (Datanode Uuid cbed6db4-32a8-4b39-b004-be82e94f4643) service to localhost/127.0.0.1:42959 2023-07-21 00:14:41,306 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data3/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:41,306 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data4/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:41,308 WARN [Listener at localhost/38819] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:41,312 INFO [Listener at localhost/38819] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:41,415 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:41,415 WARN [BP-623951809-172.31.14.131-1689898475731 heartbeating to localhost/127.0.0.1:42959] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-623951809-172.31.14.131-1689898475731 (Datanode Uuid 45e5e002-240b-4d3b-94c7-cf3a628b8284) service to localhost/127.0.0.1:42959 2023-07-21 00:14:41,416 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data1/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:41,416 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/cluster_7e7efc1e-9bda-4757-7960-9be15d49d74a/dfs/data/data2/current/BP-623951809-172.31.14.131-1689898475731] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:41,425 INFO [Listener at localhost/38819] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:41,541 INFO [Listener at localhost/38819] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 00:14:41,573 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.log.dir so I do NOT create it in target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/3eb20d03-240b-b752-ba9a-24012c7c801e/hadoop.tmp.dir so I do NOT create it in target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30, deleteOnExit=true 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 00:14:41,574 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/test.cache.data in system properties and HBase conf 2023-07-21 00:14:41,575 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 00:14:41,575 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir in system properties and HBase conf 2023-07-21 00:14:41,575 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 00:14:41,575 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 00:14:41,575 
INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 00:14:41,576 DEBUG [Listener at localhost/38819] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-21 00:14:41,576 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:41,576 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 00:14:41,576 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:41,577 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 00:14:41,578 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/nfs.dump.dir in system properties and HBase conf 2023-07-21 00:14:41,578 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir in system properties and HBase conf 2023-07-21 00:14:41,578 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 00:14:41,578 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 00:14:41,578 INFO [Listener at localhost/38819] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 00:14:41,585 WARN [Listener at localhost/38819] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:41,585 WARN [Listener at localhost/38819] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:41,631 WARN [Listener at localhost/38819] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:41,633 INFO [Listener at localhost/38819] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:41,639 DEBUG [Listener at localhost/38819-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101853ae84c000a, quorum=127.0.0.1:63294, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-21 00:14:41,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101853ae84c000a, quorum=127.0.0.1:63294, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-21 00:14:41,639 INFO [Listener at localhost/38819] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/Jetty_localhost_34569_hdfs____.o0oe5b/webapp 2023-07-21 00:14:41,738 INFO [Listener at localhost/38819] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34569 2023-07-21 00:14:41,786 WARN [Listener at localhost/38819] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 00:14:41,787 WARN [Listener at localhost/38819] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 00:14:41,842 WARN [Listener at localhost/40339] common.MetricsLoggerTask(153): Metrics logging will not 
be async since the logger is not log4j 2023-07-21 00:14:41,856 WARN [Listener at localhost/40339] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:41,858 WARN [Listener at localhost/40339] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:41,859 INFO [Listener at localhost/40339] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:41,865 INFO [Listener at localhost/40339] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/Jetty_localhost_39919_datanode____g3f142/webapp 2023-07-21 00:14:41,967 INFO [Listener at localhost/40339] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39919 2023-07-21 00:14:41,974 WARN [Listener at localhost/42167] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:41,996 WARN [Listener at localhost/42167] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:41,998 WARN [Listener at localhost/42167] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:41,999 INFO [Listener at localhost/42167] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:42,003 INFO [Listener at localhost/42167] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/Jetty_localhost_38945_datanode____iv4i1o/webapp 2023-07-21 00:14:42,087 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17b72076f54a2c10: Processing first storage report for DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e from datanode 6a459a30-7fc6-436c-b2ca-e7fe04c1f1dc 2023-07-21 00:14:42,087 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x17b72076f54a2c10: from storage DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e node DatanodeRegistration(127.0.0.1:46795, datanodeUuid=6a459a30-7fc6-436c-b2ca-e7fe04c1f1dc, infoPort=35573, infoSecurePort=0, ipcPort=42167, storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,087 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17b72076f54a2c10: Processing first storage report for DS-46e2801b-4e76-4111-b8fc-a09ccd880818 from datanode 6a459a30-7fc6-436c-b2ca-e7fe04c1f1dc 2023-07-21 00:14:42,088 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x17b72076f54a2c10: from storage DS-46e2801b-4e76-4111-b8fc-a09ccd880818 node DatanodeRegistration(127.0.0.1:46795, datanodeUuid=6a459a30-7fc6-436c-b2ca-e7fe04c1f1dc, infoPort=35573, infoSecurePort=0, ipcPort=42167, storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,114 INFO 
[Listener at localhost/42167] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38945 2023-07-21 00:14:42,122 WARN [Listener at localhost/46267] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:42,137 WARN [Listener at localhost/46267] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 00:14:42,139 WARN [Listener at localhost/46267] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 00:14:42,140 INFO [Listener at localhost/46267] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 00:14:42,143 INFO [Listener at localhost/46267] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/Jetty_localhost_36749_datanode____.xit68w/webapp 2023-07-21 00:14:42,233 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6fff29b80a806de: Processing first storage report for DS-39ad5284-7328-466c-9b07-31be4237cf46 from datanode 06c39073-d971-4162-910a-8bde4402365a 2023-07-21 00:14:42,233 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6fff29b80a806de: from storage DS-39ad5284-7328-466c-9b07-31be4237cf46 node DatanodeRegistration(127.0.0.1:46599, datanodeUuid=06c39073-d971-4162-910a-8bde4402365a, infoPort=40873, infoSecurePort=0, ipcPort=46267, storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,233 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa6fff29b80a806de: Processing first storage report for DS-7c42d306-4ae0-4745-9e4c-a845882b16b3 from datanode 06c39073-d971-4162-910a-8bde4402365a 2023-07-21 00:14:42,233 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa6fff29b80a806de: from storage DS-7c42d306-4ae0-4745-9e4c-a845882b16b3 node DatanodeRegistration(127.0.0.1:46599, datanodeUuid=06c39073-d971-4162-910a-8bde4402365a, infoPort=40873, infoSecurePort=0, ipcPort=46267, storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,252 INFO [Listener at localhost/46267] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36749 2023-07-21 00:14:42,275 WARN [Listener at localhost/44727] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 00:14:42,400 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x85c2ef86db49116d: Processing first storage report for DS-249fb10b-85af-46b8-b83d-b18a49e4617b from datanode 18cc931e-84ea-4e0c-86e4-57340b8c0d1a 2023-07-21 00:14:42,400 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x85c2ef86db49116d: from storage DS-249fb10b-85af-46b8-b83d-b18a49e4617b node DatanodeRegistration(127.0.0.1:44315, datanodeUuid=18cc931e-84ea-4e0c-86e4-57340b8c0d1a, infoPort=34749, infoSecurePort=0, ipcPort=44727, 
storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,400 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x85c2ef86db49116d: Processing first storage report for DS-ffba9c68-3493-4eea-9046-c15ad2b8dc1a from datanode 18cc931e-84ea-4e0c-86e4-57340b8c0d1a 2023-07-21 00:14:42,400 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x85c2ef86db49116d: from storage DS-ffba9c68-3493-4eea-9046-c15ad2b8dc1a node DatanodeRegistration(127.0.0.1:44315, datanodeUuid=18cc931e-84ea-4e0c-86e4-57340b8c0d1a, infoPort=34749, infoSecurePort=0, ipcPort=44727, storageInfo=lv=-57;cid=testClusterID;nsid=1017270741;c=1689898481590), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 00:14:42,488 DEBUG [Listener at localhost/44727] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198 2023-07-21 00:14:42,490 INFO [Listener at localhost/44727] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/zookeeper_0, clientPort=57003, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 00:14:42,491 INFO [Listener at localhost/44727] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57003 2023-07-21 00:14:42,491 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,492 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,509 INFO [Listener at localhost/44727] util.FSUtils(471): Created version file at hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b with version=8 2023-07-21 00:14:42,510 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36751/user/jenkins/test-data/81865f24-cb6c-774c-bb58-2c07c4b2c336/hbase-staging 2023-07-21 00:14:42,511 DEBUG [Listener at localhost/44727] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 00:14:42,511 DEBUG [Listener at localhost/44727] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 00:14:42,511 DEBUG [Listener at localhost/44727] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 
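The restart mirrors the first bring-up: a fresh MiniZooKeeperCluster on clientPort=57003, a new HDFS root and version file, and a LocalHBaseCluster with all server ports randomized, all driven by the StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, ...} printed earlier. A minimal sketch of building and using that option object, assuming the HBase 2.x test API:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class RestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Same shape as the option printed in the log: 1 master, 3 regionservers,
        // 3 datanodes, 1 ZK server, no pre-created root dir or WAL dir.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();

        util.startMiniCluster(option);
        try {
          // ... test body would go here ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }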
2023-07-21 00:14:42,511 DEBUG [Listener at localhost/44727] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:42,512 INFO [Listener at localhost/44727] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:42,513 INFO [Listener at localhost/44727] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39747 2023-07-21 00:14:42,513 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,514 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,515 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39747 connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:42,523 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:397470x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:42,523 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39747-0x101853afe3c0000 connected 2023-07-21 00:14:42,540 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:42,540 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:42,541 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:42,543 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39747 2023-07-21 00:14:42,544 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39747 2023-07-21 00:14:42,544 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39747 2023-07-21 00:14:42,545 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39747 2023-07-21 00:14:42,545 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39747 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:42,547 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:42,548 INFO [Listener at localhost/44727] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
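With the master's NettyRpcServer bound on port 39747 and its handler pools started, a client reaches this cluster through the ZooKeeper ensemble at 127.0.0.1:57003 that the master registered with. A minimal sketch of the client-side configuration, assuming the standard HBase client API (quorum and client port are taken from this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point the client at the mini cluster's ZooKeeper ensemble; the client
        // discovers the active master and region locations from there.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 57003);

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
        }
      }
    }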
2023-07-21 00:14:42,548 INFO [Listener at localhost/44727] http.HttpServer(1146): Jetty bound to port 43831 2023-07-21 00:14:42,548 INFO [Listener at localhost/44727] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:42,554 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,554 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6483b7a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:42,554 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,554 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@746ec252{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:42,673 INFO [Listener at localhost/44727] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:42,675 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:42,675 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:42,675 INFO [Listener at localhost/44727] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:42,678 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,679 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5d66f18d{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/jetty-0_0_0_0-43831-hbase-server-2_4_18-SNAPSHOT_jar-_-any-462409707521164495/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:42,680 INFO [Listener at localhost/44727] server.AbstractConnector(333): Started ServerConnector@44888ac3{HTTP/1.1, (http/1.1)}{0.0.0.0:43831} 2023-07-21 00:14:42,681 INFO [Listener at localhost/44727] server.Server(415): Started @40492ms 2023-07-21 00:14:42,681 INFO [Listener at localhost/44727] master.HMaster(444): hbase.rootdir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b, hbase.cluster.distributed=false 2023-07-21 00:14:42,701 INFO [Listener at localhost/44727] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:42,701 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,702 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,702 INFO 
[Listener at localhost/44727] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:42,702 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,702 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:42,702 INFO [Listener at localhost/44727] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:42,707 INFO [Listener at localhost/44727] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36361 2023-07-21 00:14:42,707 INFO [Listener at localhost/44727] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:42,708 DEBUG [Listener at localhost/44727] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:42,709 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,710 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,711 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36361 connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:42,717 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:363610x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:42,718 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:363610x0, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:42,719 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:363610x0, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:42,719 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:363610x0, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:42,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36361-0x101853afe3c0001 connected 2023-07-21 00:14:42,723 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36361 2023-07-21 00:14:42,723 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36361 2023-07-21 00:14:42,724 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36361 2023-07-21 00:14:42,724 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, 
port=36361 2023-07-21 00:14:42,724 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36361 2023-07-21 00:14:42,726 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:42,726 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:42,726 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:42,727 INFO [Listener at localhost/44727] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:42,727 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:42,727 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:42,727 INFO [Listener at localhost/44727] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:42,728 INFO [Listener at localhost/44727] http.HttpServer(1146): Jetty bound to port 40557 2023-07-21 00:14:42,728 INFO [Listener at localhost/44727] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:42,739 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,739 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e84d38b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:42,739 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,740 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4cf17b35{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:42,853 INFO [Listener at localhost/44727] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:42,854 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:42,854 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:42,854 INFO [Listener at localhost/44727] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:42,855 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,856 INFO [Listener at localhost/44727] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@8d4a78e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/jetty-0_0_0_0-40557-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1968219322955391493/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:42,857 INFO [Listener at localhost/44727] server.AbstractConnector(333): Started ServerConnector@521a0479{HTTP/1.1, (http/1.1)}{0.0.0.0:40557} 2023-07-21 00:14:42,857 INFO [Listener at localhost/44727] server.Server(415): Started @40669ms 2023-07-21 00:14:42,870 INFO [Listener at localhost/44727] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:42,871 INFO [Listener at localhost/44727] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:42,872 INFO [Listener at localhost/44727] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36937 2023-07-21 00:14:42,872 INFO [Listener at localhost/44727] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:42,873 DEBUG [Listener at localhost/44727] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:42,874 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,875 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:42,875 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36937 connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:42,879 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:369370x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:42,880 DEBUG [Listener at 
localhost/44727] zookeeper.ZKUtil(164): regionserver:369370x0, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:42,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36937-0x101853afe3c0002 connected 2023-07-21 00:14:42,881 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:42,881 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:42,881 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36937 2023-07-21 00:14:42,882 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36937 2023-07-21 00:14:42,883 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36937 2023-07-21 00:14:42,883 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36937 2023-07-21 00:14:42,883 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36937 2023-07-21 00:14:42,885 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:42,885 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:42,885 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:42,885 INFO [Listener at localhost/44727] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:42,886 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:42,886 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:42,886 INFO [Listener at localhost/44727] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 00:14:42,886 INFO [Listener at localhost/44727] http.HttpServer(1146): Jetty bound to port 40861 2023-07-21 00:14:42,886 INFO [Listener at localhost/44727] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:42,889 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,890 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5b843864{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:42,890 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:42,890 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a39ee87{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:43,014 INFO [Listener at localhost/44727] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:43,015 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:43,015 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:43,016 INFO [Listener at localhost/44727] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:43,017 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:43,018 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5dd8db29{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/jetty-0_0_0_0-40861-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5675366409123890129/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:43,020 INFO [Listener at localhost/44727] server.AbstractConnector(333): Started ServerConnector@db0735{HTTP/1.1, (http/1.1)}{0.0.0.0:40861} 2023-07-21 00:14:43,020 INFO [Listener at localhost/44727] server.Server(415): Started @40832ms 2023-07-21 00:14:43,033 INFO [Listener at localhost/44727] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:43,033 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:43,034 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:43,034 INFO [Listener at localhost/44727] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:43,034 INFO 
[Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:43,034 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:43,034 INFO [Listener at localhost/44727] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:43,035 INFO [Listener at localhost/44727] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35077 2023-07-21 00:14:43,035 INFO [Listener at localhost/44727] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:43,036 DEBUG [Listener at localhost/44727] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:43,037 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:43,038 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:43,038 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35077 connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:43,042 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:350770x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:43,043 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:350770x0, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:43,043 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35077-0x101853afe3c0003 connected 2023-07-21 00:14:43,044 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:43,044 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:43,047 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35077 2023-07-21 00:14:43,047 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35077 2023-07-21 00:14:43,047 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35077 2023-07-21 00:14:43,048 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35077 2023-07-21 00:14:43,048 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=35077 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:43,050 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:43,051 INFO [Listener at localhost/44727] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:43,051 INFO [Listener at localhost/44727] http.HttpServer(1146): Jetty bound to port 37945 2023-07-21 00:14:43,051 INFO [Listener at localhost/44727] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:43,055 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:43,055 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3bddfc7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:43,055 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:43,056 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@59d396e9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:43,179 INFO [Listener at localhost/44727] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:43,180 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:43,180 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:43,180 INFO [Listener at localhost/44727] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 00:14:43,181 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:43,182 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2a25343d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/jetty-0_0_0_0-37945-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1314510807360924256/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:43,183 INFO [Listener at localhost/44727] server.AbstractConnector(333): Started ServerConnector@77e9a657{HTTP/1.1, (http/1.1)}{0.0.0.0:37945} 2023-07-21 00:14:43,184 INFO [Listener at localhost/44727] server.Server(415): Started @40995ms 2023-07-21 00:14:43,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:43,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@346e13ed{HTTP/1.1, (http/1.1)}{0.0.0.0:37413} 2023-07-21 00:14:43,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41000ms 2023-07-21 00:14:43,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,191 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:43,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,193 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:43,193 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:43,193 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:43,193 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,193 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:43,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:43,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39747,1689898482511 from backup master directory 2023-07-21 00:14:43,197 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:43,198 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,198 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 00:14:43,198 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:43,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/hbase.id with ID: 5d6825f5-0ec3-416c-9136-e6638abbb2b4 2023-07-21 00:14:43,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:43,226 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x75cd413a to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:43,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@214cfcbb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:43,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:43,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 00:14:43,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:43,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store-tmp 2023-07-21 00:14:43,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:43,249 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:43,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:43,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:43,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:43,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 00:14:43,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:43,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/WALs/jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39747%2C1689898482511, suffix=, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/WALs/jenkins-hbase4.apache.org,39747,1689898482511, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/oldWALs, maxLogs=10 2023-07-21 00:14:43,268 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:43,268 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:43,268 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:43,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/WALs/jenkins-hbase4.apache.org,39747,1689898482511/jenkins-hbase4.apache.org%2C39747%2C1689898482511.1689898483253 2023-07-21 00:14:43,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK], DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK]] 2023-07-21 00:14:43,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:43,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,271 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,273 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 00:14:43,273 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 00:14:43,274 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 00:14:43,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:43,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10990544000, jitterRate=0.02357417345046997}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:43,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:43,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 00:14:43,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 00:14:43,281 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 00:14:43,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 00:14:43,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-21 00:14:43,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-21 00:14:43,282 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 00:14:43,282 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 00:14:43,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-21 00:14:43,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 00:14:43,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 00:14:43,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 00:14:43,286 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,286 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 00:14:43,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 00:14:43,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 00:14:43,288 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:43,288 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:43,288 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-21 00:14:43,289 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:43,289 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39747,1689898482511, sessionid=0x101853afe3c0000, setting cluster-up flag (Was=false) 2023-07-21 00:14:43,295 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,298 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 00:14:43,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,303 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,307 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 00:14:43,307 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:43,308 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.hbase-snapshot/.tmp 2023-07-21 00:14:43,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-21 00:14:43,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-21 00:14:43,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-21 00:14:43,310 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:43,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-21 00:14:43,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:43,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:43,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 00:14:43,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 00:14:43,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:43,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689898513328 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 00:14:43,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,328 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:43,328 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 00:14:43,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 00:14:43,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 00:14:43,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 00:14:43,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 00:14:43,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 00:14:43,330 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:43,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898483330,5,FailOnTimeoutGroup] 2023-07-21 00:14:43,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898483330,5,FailOnTimeoutGroup] 2023-07-21 00:14:43,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-21 00:14:43,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,345 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:43,345 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:43,346 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b 2023-07-21 00:14:43,357 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,358 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family info of region 1588230740 2023-07-21 00:14:43,360 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/info 2023-07-21 00:14:43,360 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:43,361 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,361 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:43,362 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:43,363 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:43,363 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,363 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:43,365 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/table 2023-07-21 00:14:43,365 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:43,365 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,366 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740 2023-07-21 00:14:43,367 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740 2023-07-21 00:14:43,369 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-21 00:14:43,370 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:43,372 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10615357920, jitterRate=-0.011367753148078918}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:43,372 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:43,372 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:43,373 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:43,373 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:43,374 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 00:14:43,374 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 00:14:43,374 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 00:14:43,374 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 00:14:43,376 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 00:14:43,386 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(951): ClusterId : 5d6825f5-0ec3-416c-9136-e6638abbb2b4 2023-07-21 00:14:43,386 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(951): ClusterId : 5d6825f5-0ec3-416c-9136-e6638abbb2b4 2023-07-21 00:14:43,386 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:43,386 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(951): ClusterId : 5d6825f5-0ec3-416c-9136-e6638abbb2b4 2023-07-21 00:14:43,386 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:43,386 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:43,388 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:43,388 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:43,389 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:43,389 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:43,389 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:43,389 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:43,394 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:43,396 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:43,396 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ReadOnlyZKClient(139): Connect 0x633d50a3 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:43,396 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:43,400 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ReadOnlyZKClient(139): Connect 0x7b1356a1 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:43,400 DEBUG [RS:2;jenkins-hbase4:35077] zookeeper.ReadOnlyZKClient(139): Connect 0x0b6bf005 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:43,407 DEBUG [RS:1;jenkins-hbase4:36937] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dfab2b4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:43,407 DEBUG [RS:1;jenkins-hbase4:36937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69c53bd3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:43,412 DEBUG [RS:0;jenkins-hbase4:36361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a09fa8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:43,412 DEBUG [RS:2;jenkins-hbase4:35077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4750fa58, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:43,412 DEBUG [RS:0;jenkins-hbase4:36361] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4aa2c9ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:43,413 DEBUG [RS:2;jenkins-hbase4:35077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3aaa7f86, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:43,421 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:36937 2023-07-21 00:14:43,421 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35077 2023-07-21 00:14:43,421 INFO [RS:1;jenkins-hbase4:36937] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:43,421 INFO [RS:2;jenkins-hbase4:35077] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:43,421 INFO [RS:2;jenkins-hbase4:35077] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:43,421 INFO [RS:1;jenkins-hbase4:36937] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:43,421 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:43,421 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1022): About to register with Master. 
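[Editorial note] The ReadOnlyZKClient and AbstractRpcClient entries above show each region server connecting to the test ZooKeeper quorum at 127.0.0.1:57003 with a 90-second session timeout. Purely as an illustration (the client port is an ephemeral test port that changes on every run, and none of this code is taken from the test itself), a client-side connection using the same settings might look like:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Quorum host and (ephemeral) client port echoed by the ReadOnlyZKClient lines above.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 57003);
        // Matches "session timeout=90000ms" in the log.
        conf.setInt("zookeeper.session.timeout", 90000);
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          System.out.println("Connected to mini cluster: " + !connection.isClosed());
        }
      }
    }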
2023-07-21 00:14:43,422 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39747,1689898482511 with isa=jenkins-hbase4.apache.org/172.31.14.131:35077, startcode=1689898483033 2023-07-21 00:14:43,422 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39747,1689898482511 with isa=jenkins-hbase4.apache.org/172.31.14.131:36937, startcode=1689898482870 2023-07-21 00:14:43,422 DEBUG [RS:2;jenkins-hbase4:35077] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:43,422 DEBUG [RS:1;jenkins-hbase4:36937] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:43,424 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41339, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:43,424 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57735, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:43,426 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39747] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,426 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-21 00:14:43,426 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-21 00:14:43,426 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39747] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,427 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 00:14:43,427 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-21 00:14:43,427 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b 2023-07-21 00:14:43,427 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40339 2023-07-21 00:14:43,427 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43831 2023-07-21 00:14:43,427 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36361 2023-07-21 00:14:43,427 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b 2023-07-21 00:14:43,427 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40339 2023-07-21 00:14:43,427 INFO [RS:0;jenkins-hbase4:36361] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:43,427 INFO [RS:0;jenkins-hbase4:36361] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:43,427 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43831 2023-07-21 00:14:43,427 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:43,428 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:43,437 DEBUG [RS:2;jenkins-hbase4:35077] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,437 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35077,1689898483033] 2023-07-21 00:14:43,437 WARN [RS:2;jenkins-hbase4:35077] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 00:14:43,437 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36937,1689898482870] 2023-07-21 00:14:43,437 INFO [RS:2;jenkins-hbase4:35077] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:43,437 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39747,1689898482511 with isa=jenkins-hbase4.apache.org/172.31.14.131:36361, startcode=1689898482701 2023-07-21 00:14:43,437 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,437 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,437 DEBUG [RS:0;jenkins-hbase4:36361] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:43,437 WARN [RS:1;jenkins-hbase4:36937] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 00:14:43,438 INFO [RS:1;jenkins-hbase4:36937] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:43,438 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,442 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40151, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:43,443 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39747] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,443 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-21 00:14:43,443 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-21 00:14:43,444 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b 2023-07-21 00:14:43,444 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40339 2023-07-21 00:14:43,444 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43831 2023-07-21 00:14:43,445 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:43,445 DEBUG [RS:2;jenkins-hbase4:35077] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,445 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:43,446 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,446 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,446 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36361,1689898482701] 2023-07-21 00:14:43,446 WARN [RS:0;jenkins-hbase4:36361] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
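[Editorial note] At this point RSGroupInfoManagerImpl reports "Updated with servers: 3", i.e. all three region servers sit in the default rsgroup. As a hypothetical sketch only, assuming the RSGroupAdminClient API from the hbase-rsgroup module and an already-open Connection (the group name and server address below are made up, not taken from this run):

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupAdminSketch {
      // Moves one region server out of the default group into a freshly created group.
      static void createGroupAndMoveServer(Connection connection) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        rsGroupAdmin.addRSGroup("test_group");                                    // hypothetical group name
        Address server = Address.fromParts("jenkins-hbase4.apache.org", 36361);  // example host:port
        rsGroupAdmin.moveServers(Collections.singleton(server), "test_group");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("test_group");
        System.out.println("servers in test_group: " + info.getServers());
      }
    }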
2023-07-21 00:14:43,446 INFO [RS:0;jenkins-hbase4:36361] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:43,446 DEBUG [RS:2;jenkins-hbase4:35077] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,446 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,446 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,447 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,447 DEBUG [RS:2;jenkins-hbase4:35077] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:43,448 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,448 INFO [RS:2;jenkins-hbase4:35077] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:43,450 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:43,450 INFO [RS:1;jenkins-hbase4:36937] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:43,451 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,451 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,451 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,452 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:43,452 INFO [RS:0;jenkins-hbase4:36361] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:43,453 INFO [RS:2;jenkins-hbase4:35077] 
regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:43,454 INFO [RS:1;jenkins-hbase4:36937] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:43,455 INFO [RS:0;jenkins-hbase4:36361] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:43,458 INFO [RS:2;jenkins-hbase4:35077] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:43,458 INFO [RS:0;jenkins-hbase4:36361] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:43,458 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,458 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,459 INFO [RS:1;jenkins-hbase4:36937] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:43,459 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:43,459 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,459 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:43,460 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:43,461 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,461 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
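[Editorial note] The MemStoreFlusher and PressureAwareCompactionThroughputController entries reflect heap-derived defaults (global memstore limit and low mark, plus 100 MB/s and 50 MB/s compaction throughput bounds). A minimal sketch of the configuration keys these values are normally derived from; the key names are standard HBase settings, but treat the exact names and fractions as assumptions rather than something the log asserts:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemstoreAndThroughputSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Global memstore upper limit as a fraction of heap; 782.4 M above is 40% of the test JVM heap.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Lower watermark as a fraction of the upper limit (roughly the 743.3 M low mark).
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds matching "higher bound: 100.00 MB/second, lower bound 50.00 MB/second".
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
      }
    }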
2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:2;jenkins-hbase4:35077] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,462 DEBUG [RS:0;jenkins-hbase4:36361] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,466 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,469 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,469 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,469 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,469 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,469 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,469 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,469 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,469 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,470 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,470 DEBUG [RS:1;jenkins-hbase4:36937] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:43,473 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,473 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,473 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,481 INFO [RS:0;jenkins-hbase4:36361] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:43,481 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36361,1689898482701-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,484 INFO [RS:2;jenkins-hbase4:35077] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:43,484 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35077,1689898483033-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,489 INFO [RS:1;jenkins-hbase4:36937] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:43,490 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36937,1689898482870-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:43,493 INFO [RS:0;jenkins-hbase4:36361] regionserver.Replication(203): jenkins-hbase4.apache.org,36361,1689898482701 started 2023-07-21 00:14:43,493 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36361,1689898482701, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36361, sessionid=0x101853afe3c0001 2023-07-21 00:14:43,495 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:43,495 DEBUG [RS:0;jenkins-hbase4:36361] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,496 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36361,1689898482701' 2023-07-21 00:14:43,496 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:43,496 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36361,1689898482701' 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:43,497 DEBUG [RS:0;jenkins-hbase4:36361] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:43,498 DEBUG [RS:0;jenkins-hbase4:36361] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:43,498 INFO [RS:0;jenkins-hbase4:36361] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:43,498 INFO [RS:0;jenkins-hbase4:36361] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 00:14:43,501 INFO [RS:2;jenkins-hbase4:35077] regionserver.Replication(203): jenkins-hbase4.apache.org,35077,1689898483033 started 2023-07-21 00:14:43,501 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35077,1689898483033, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35077, sessionid=0x101853afe3c0003 2023-07-21 00:14:43,501 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:43,501 DEBUG [RS:2;jenkins-hbase4:35077] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35077,1689898483033' 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35077,1689898483033' 2023-07-21 00:14:43,502 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:43,503 DEBUG [RS:2;jenkins-hbase4:35077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:43,503 DEBUG [RS:2;jenkins-hbase4:35077] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:43,503 INFO [RS:2;jenkins-hbase4:35077] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:43,503 INFO [RS:2;jenkins-hbase4:35077] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 00:14:43,506 INFO [RS:1;jenkins-hbase4:36937] regionserver.Replication(203): jenkins-hbase4.apache.org,36937,1689898482870 started 2023-07-21 00:14:43,506 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36937,1689898482870, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36937, sessionid=0x101853afe3c0002 2023-07-21 00:14:43,506 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:43,506 DEBUG [RS:1;jenkins-hbase4:36937] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,506 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36937,1689898482870' 2023-07-21 00:14:43,506 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:43,506 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36937,1689898482870' 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:43,507 DEBUG [RS:1;jenkins-hbase4:36937] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:43,507 INFO [RS:1;jenkins-hbase4:36937] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:43,507 INFO [RS:1;jenkins-hbase4:36937] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
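[Editorial note] Each region server logs "Quota support disabled" because quota support is off by default in this run. As an aside, and as an assumption about the standard switch rather than anything this test configures, the RPC and space quota managers are enabled by setting hbase.quota.enabled before the cluster starts:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaEnableSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // With this set, RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager start
        // instead of logging "Quota support disabled".
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println("quota enabled: " + conf.getBoolean("hbase.quota.enabled", false));
      }
    }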
2023-07-21 00:14:43,526 DEBUG [jenkins-hbase4:39747] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 00:14:43,526 DEBUG [jenkins-hbase4:39747] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:43,526 DEBUG [jenkins-hbase4:39747] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:43,527 DEBUG [jenkins-hbase4:39747] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:43,527 DEBUG [jenkins-hbase4:39747] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:43,527 DEBUG [jenkins-hbase4:39747] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:43,528 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36937,1689898482870, state=OPENING 2023-07-21 00:14:43,529 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 00:14:43,530 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:43,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36937,1689898482870}] 2023-07-21 00:14:43,531 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:43,599 INFO [RS:0;jenkins-hbase4:36361] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36361%2C1689898482701, suffix=, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36361,1689898482701, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs, maxLogs=32 2023-07-21 00:14:43,605 INFO [RS:2;jenkins-hbase4:35077] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35077%2C1689898483033, suffix=, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,35077,1689898483033, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs, maxLogs=32 2023-07-21 00:14:43,609 INFO [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36937%2C1689898482870, suffix=, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36937,1689898482870, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs, maxLogs=32 2023-07-21 00:14:43,617 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:43,617 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:43,617 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:43,620 WARN [ReadOnlyZKClient-127.0.0.1:57003@0x75cd413a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-21 00:14:43,620 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:43,622 INFO [RS:0;jenkins-hbase4:36361] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36361,1689898482701/jenkins-hbase4.apache.org%2C36361%2C1689898482701.1689898483600 2023-07-21 00:14:43,627 DEBUG [RS:0;jenkins-hbase4:36361] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK], DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK]] 2023-07-21 00:14:43,627 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38908, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:43,629 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36937] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:38908 deadline: 1689898543628, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,629 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:43,629 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:43,629 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:43,641 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:43,641 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:43,641 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:43,641 INFO [RS:2;jenkins-hbase4:35077] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,35077,1689898483033/jenkins-hbase4.apache.org%2C35077%2C1689898483033.1689898483605 2023-07-21 00:14:43,642 DEBUG [RS:2;jenkins-hbase4:35077] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK], DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK]] 2023-07-21 00:14:43,643 INFO [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36937,1689898482870/jenkins-hbase4.apache.org%2C36937%2C1689898482870.1689898483609 2023-07-21 00:14:43,643 DEBUG [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK], DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK]] 2023-07-21 00:14:43,686 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,687 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:43,689 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38910, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:43,692 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 00:14:43,693 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:43,695 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36937%2C1689898482870.meta, suffix=.meta, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36937,1689898482870, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs, maxLogs=32 2023-07-21 00:14:43,716 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:43,716 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:43,716 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:43,718 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,36937,1689898482870/jenkins-hbase4.apache.org%2C36937%2C1689898482870.meta.1689898483695.meta 2023-07-21 00:14:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK], DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK]] 2023-07-21 00:14:43,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 00:14:43,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
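[Editorial note] The "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" entries above, together with the AsyncFSWALProvider instantiation, are driven by region-server WAL settings. A sketch of the corresponding keys, offered as an assumption about the usual knobs rather than something this test sets explicitly:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                             // AsyncFSWALProvider, as instantiated above
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // "blocksize=256 MB"
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = blocksize * multiplier = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // "maxLogs=32"
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }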
2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 00:14:43,719 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 00:14:43,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 00:14:43,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/info 2023-07-21 00:14:43,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/info 2023-07-21 00:14:43,722 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 00:14:43,723 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,723 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 00:14:43,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:43,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/rep_barrier 2023-07-21 00:14:43,724 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 00:14:43,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 00:14:43,725 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/table 2023-07-21 00:14:43,725 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/table 2023-07-21 00:14:43,725 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 00:14:43,726 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:43,726 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740 2023-07-21 00:14:43,727 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740 2023-07-21 00:14:43,729 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
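[Editorial note] The repeated CompactionConfiguration dump for the info, rep_barrier and table families of hbase:meta prints the effective compaction thresholds. A minimal sketch of the configuration keys behind those numbers; the key names are the standard ones and the values simply mirror the defaults shown above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);                // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);               // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);         // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f); // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);    // major period: 7 days in ms
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);  // major jitter
        System.out.println(conf.getInt("hbase.hstore.compaction.min", -1));
      }
    }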
2023-07-21 00:14:43,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 00:14:43,731 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9406502400, jitterRate=-0.12395119667053223}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-21 00:14:43,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 00:14:43,732 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689898483685 2023-07-21 00:14:43,736 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 00:14:43,737 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 00:14:43,737 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36937,1689898482870, state=OPEN 2023-07-21 00:14:43,748 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 00:14:43,748 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 00:14:43,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 00:14:43,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36937,1689898482870 in 217 msec 2023-07-21 00:14:43,761 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 00:14:43,761 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 385 msec 2023-07-21 00:14:43,763 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 452 msec 2023-07-21 00:14:43,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689898483763, completionTime=-1 2023-07-21 00:14:43,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 00:14:43,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
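[Editorial note] Once pid=1 (InitMetaProcedure) finishes, hbase:meta is open on jenkins-hbase4.apache.org,36937 and its location is published in ZooKeeper. A small illustrative check a client could run against the cluster at this point; the API calls are standard client API, but the lookup itself is not part of this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             RegionLocator locator = connection.getRegionLocator(TableName.META_TABLE_NAME)) {
          HRegionLocation metaLocation = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          // Should report the server that logged "Opened hbase:meta,,1.1588230740" above.
          System.out.println("hbase:meta is on " + metaLocation.getServerName());
        }
      }
    }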
2023-07-21 00:14:43,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 00:14:43,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689898543772 2023-07-21 00:14:43,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689898603772 2023-07-21 00:14:43,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 9 msec 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39747,1689898482511-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39747,1689898482511-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39747,1689898482511-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39747, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
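The ChoreService entries just above report the period each master maintenance chore is scheduled with. A sketch of the configuration keys those periods usually come from; the key names are the standard ones, but treat the mapping as illustrative rather than exhaustive:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterChorePeriodsSketch {
      public static Configuration chorePeriodsFromLog() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.balancer.period", 300000);          // BalancerChore, ms
        conf.setInt("hbase.normalizer.period", 300000);        // RegionNormalizerChore, ms
        conf.setInt("hbase.catalogjanitor.interval", 300000);  // CatalogJanitor, ms
        // HbckChore (3600000 ms above) is commonly tuned via this key.
        conf.setInt("hbase.master.hbck.chore.interval", 3600000);
        return conf;
      }
    }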
2023-07-21 00:14:43,779 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:43,780 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 00:14:43,781 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 00:14:43,782 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:43,782 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:43,784 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/namespace/02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:43,784 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/namespace/02a141934bb11deb09903fdd06e94126 empty. 2023-07-21 00:14:43,785 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/namespace/02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:43,785 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 00:14:43,799 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:43,801 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 02a141934bb11deb09903fdd06e94126, NAME => 'hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp 2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 02a141934bb11deb09903fdd06e94126, disabling compactions & flushes 2023-07-21 00:14:43,810 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 
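The create 'hbase:namespace' entry above spells out the table descriptor the master builds internally. A client-side sketch of an equivalent descriptor using the public TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API; the table name here is hypothetical so the sketch does not collide with the real system table:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeDescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_ns_like"))   // hypothetical name for illustration
            .setColumnFamily(info)
            .build();
      }
    }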
2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. after waiting 0 ms 2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:43,810 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:43,810 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 02a141934bb11deb09903fdd06e94126: 2023-07-21 00:14:43,813 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:43,814 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898483814"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898483814"}]},"ts":"1689898483814"} 2023-07-21 00:14:43,816 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:43,817 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:43,817 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898483817"}]},"ts":"1689898483817"} 2023-07-21 00:14:43,818 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 00:14:43,822 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:43,822 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:43,822 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:43,822 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:43,822 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:43,822 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=02a141934bb11deb09903fdd06e94126, ASSIGN}] 2023-07-21 00:14:43,824 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=02a141934bb11deb09903fdd06e94126, ASSIGN 2023-07-21 00:14:43,824 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=02a141934bb11deb09903fdd06e94126, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36361,1689898482701; forceNewPlan=false, retain=false 2023-07-21 00:14:43,932 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:43,934 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-21 00:14:43,936 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:43,937 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:43,938 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:43,939 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f empty. 
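The create 'hbase:rsgroup' entry above shows a descriptor that additionally carries a coprocessor (MultiRowMutationEndpoint) and a SPLIT_POLICY metadata override. A sketch of how those two attributes are expressed through the same builder API, again with a hypothetical table name:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class RsGroupLikeDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_rsgroup_like"))       // hypothetical name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))   // NAME => 'm', defaults otherwise
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setValue("SPLIT_POLICY",
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }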
2023-07-21 00:14:43,939 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:43,939 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-21 00:14:43,950 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:43,951 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e3201ae0898cc64d046937666d6d312f, NAME => 'hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp 2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e3201ae0898cc64d046937666d6d312f, disabling compactions & flushes 2023-07-21 00:14:43,962 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. after waiting 0 ms 2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:43,962 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 
2023-07-21 00:14:43,962 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e3201ae0898cc64d046937666d6d312f: 2023-07-21 00:14:43,964 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:43,965 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898483965"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898483965"}]},"ts":"1689898483965"} 2023-07-21 00:14:43,967 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 00:14:43,967 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:43,968 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898483968"}]},"ts":"1689898483968"} 2023-07-21 00:14:43,969 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-21 00:14:43,972 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:43,972 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:43,972 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:43,972 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:43,972 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:43,972 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e3201ae0898cc64d046937666d6d312f, ASSIGN}] 2023-07-21 00:14:43,973 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e3201ae0898cc64d046937666d6d312f, ASSIGN 2023-07-21 00:14:43,974 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e3201ae0898cc64d046937666d6d312f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36937,1689898482870; forceNewPlan=false, retain=false 2023-07-21 00:14:43,974 INFO [jenkins-hbase4:39747] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
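The Put entries around here write region and table state ('info:state', 'info:sn', 'info:server', ...) into hbase:meta as the assignment procedures run. A minimal client-side sketch that reads those columns back with an ordinary Scan, which is one way to observe the transitions the log records:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateScanSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Scan scan = new Scan().addFamily(Bytes.toBytes("info"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
              byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
              System.out.println(Bytes.toString(r.getRow())
                  + " state=" + (state == null ? "-" : Bytes.toString(state))
                  + " server=" + (server == null ? "-" : Bytes.toString(server)));
            }
          }
        }
      }
    }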
2023-07-21 00:14:43,976 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=02a141934bb11deb09903fdd06e94126, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:43,976 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898483976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898483976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898483976"}]},"ts":"1689898483976"} 2023-07-21 00:14:43,976 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e3201ae0898cc64d046937666d6d312f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:43,977 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898483976"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898483976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898483976"}]},"ts":"1689898483976"} 2023-07-21 00:14:43,978 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 02a141934bb11deb09903fdd06e94126, server=jenkins-hbase4.apache.org,36361,1689898482701}] 2023-07-21 00:14:43,978 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure e3201ae0898cc64d046937666d6d312f, server=jenkins-hbase4.apache.org,36937,1689898482870}] 2023-07-21 00:14:44,130 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,131 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 00:14:44,132 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 00:14:44,135 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:44,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3201ae0898cc64d046937666d6d312f, NAME => 'hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:44,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 00:14:44,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. service=MultiRowMutationService 2023-07-21 00:14:44,135 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-21 00:14:44,135 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:44,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,137 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:44,137 INFO [StoreOpener-e3201ae0898cc64d046937666d6d312f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 02a141934bb11deb09903fdd06e94126, NAME => 'hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:44,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:44,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,139 DEBUG [StoreOpener-e3201ae0898cc64d046937666d6d312f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/m 2023-07-21 00:14:44,139 INFO [StoreOpener-02a141934bb11deb09903fdd06e94126-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,139 DEBUG [StoreOpener-e3201ae0898cc64d046937666d6d312f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/m 2023-07-21 00:14:44,139 INFO 
[StoreOpener-e3201ae0898cc64d046937666d6d312f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3201ae0898cc64d046937666d6d312f columnFamilyName m 2023-07-21 00:14:44,140 INFO [StoreOpener-e3201ae0898cc64d046937666d6d312f-1] regionserver.HStore(310): Store=e3201ae0898cc64d046937666d6d312f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:44,141 DEBUG [StoreOpener-02a141934bb11deb09903fdd06e94126-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/info 2023-07-21 00:14:44,141 DEBUG [StoreOpener-02a141934bb11deb09903fdd06e94126-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/info 2023-07-21 00:14:44,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,141 INFO [StoreOpener-02a141934bb11deb09903fdd06e94126-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 02a141934bb11deb09903fdd06e94126 columnFamilyName info 2023-07-21 00:14:44,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,141 INFO [StoreOpener-02a141934bb11deb09903fdd06e94126-1] regionserver.HStore(310): Store=02a141934bb11deb09903fdd06e94126/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:44,142 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126 
2023-07-21 00:14:44,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,145 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:44,146 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:44,147 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:44,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:44,148 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e3201ae0898cc64d046937666d6d312f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@21596b56, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:44,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e3201ae0898cc64d046937666d6d312f: 2023-07-21 00:14:44,148 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 02a141934bb11deb09903fdd06e94126; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9891325280, jitterRate=-0.07879854738712311}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:44,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 02a141934bb11deb09903fdd06e94126: 2023-07-21 00:14:44,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f., pid=9, masterSystemTime=1689898484130 2023-07-21 00:14:44,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126., pid=8, masterSystemTime=1689898484130 2023-07-21 00:14:44,153 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:44,153 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 
2023-07-21 00:14:44,153 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e3201ae0898cc64d046937666d6d312f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,154 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689898484153"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898484153"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898484153"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898484153"}]},"ts":"1689898484153"} 2023-07-21 00:14:44,154 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:44,154 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:44,155 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=02a141934bb11deb09903fdd06e94126, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,155 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689898484155"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898484155"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898484155"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898484155"}]},"ts":"1689898484155"} 2023-07-21 00:14:44,158 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-21 00:14:44,158 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure e3201ae0898cc64d046937666d6d312f, server=jenkins-hbase4.apache.org,36937,1689898482870 in 178 msec 2023-07-21 00:14:44,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-21 00:14:44,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 02a141934bb11deb09903fdd06e94126, server=jenkins-hbase4.apache.org,36361,1689898482701 in 179 msec 2023-07-21 00:14:44,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-21 00:14:44,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e3201ae0898cc64d046937666d6d312f, ASSIGN in 186 msec 2023-07-21 00:14:44,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 00:14:44,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=02a141934bb11deb09903fdd06e94126, ASSIGN in 337 msec 2023-07-21 00:14:44,161 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup 
execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:44,161 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898484161"}]},"ts":"1689898484161"} 2023-07-21 00:14:44,161 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:44,161 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898484161"}]},"ts":"1689898484161"} 2023-07-21 00:14:44,162 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-21 00:14:44,163 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 00:14:44,165 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:44,166 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 233 msec 2023-07-21 00:14:44,167 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:44,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 388 msec 2023-07-21 00:14:44,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 00:14:44,182 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:44,182 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:44,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:44,188 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39700, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:44,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 00:14:44,199 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:44,201 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 
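The CreateNamespaceProcedure entries around here record the master creating the built-in 'default' and 'hbase' namespaces. For user namespaces the equivalent public Admin API looks like the following sketch; the namespace name is hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceAdminSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());  // hypothetical namespace
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());   // prints default, hbase, demo_ns
          }
          admin.deleteNamespace("demo_ns");
        }
      }
    }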
2023-07-21 00:14:44,212 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 00:14:44,219 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:44,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-21 00:14:44,236 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 00:14:44,237 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-21 00:14:44,237 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-21 00:14:44,239 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 00:14:44,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.041sec 2023-07-21 00:14:44,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 00:14:44,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 00:14:44,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 00:14:44,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39747,1689898482511-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 00:14:44,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39747,1689898482511-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
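In the entries that follow, the test disables the balancer ("set balanceSwitch=false") and lists rsgroups through the RSGroupAdminService endpoint. A client-side sketch of both calls: the Admin part is the stock API, while RSGroupAdminClient is the helper from the hbase-rsgroup module that the VerifyingRSGroupAdminClient seen below wraps (treat its exact constructor and method shapes as an assumption of this sketch):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class BalancerAndRsGroupSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          boolean previous = admin.balancerSwitch(false, true);  // same effect as the log's balanceSwitch=false
          System.out.println("balancer was on: " + previous);

          // hbase-rsgroup helper; assumed shape based on the classes named in this log.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }
        }
      }
    }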
2023-07-21 00:14:44,240 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 00:14:44,241 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:44,241 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,243 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:44,245 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-21 00:14:44,287 DEBUG [Listener at localhost/44727] zookeeper.ReadOnlyZKClient(139): Connect 0x03095489 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:44,295 DEBUG [Listener at localhost/44727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b60fb34, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:44,297 DEBUG [hconnection-0x7867acb9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:44,299 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:44,300 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:44,301 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:44,303 DEBUG [Listener at localhost/44727] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 00:14:44,307 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41140, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 00:14:44,310 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-21 00:14:44,310 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:44,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-21 00:14:44,312 DEBUG [Listener at localhost/44727] zookeeper.ReadOnlyZKClient(139): Connect 0x440581f9 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:44,329 DEBUG [Listener at localhost/44727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c2ffa4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:44,329 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:44,333 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:44,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101853afe3c000a connected 2023-07-21 00:14:44,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,342 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-21 00:14:44,359 INFO [Listener at localhost/44727] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-21 00:14:44,359 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:44,359 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:44,359 INFO [Listener at localhost/44727] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 00:14:44,360 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 00:14:44,360 INFO [Listener at localhost/44727] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 00:14:44,360 INFO [Listener at localhost/44727] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 00:14:44,360 INFO [Listener at localhost/44727] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41373 2023-07-21 00:14:44,361 INFO [Listener at localhost/44727] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 00:14:44,363 DEBUG [Listener at localhost/44727] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 00:14:44,363 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:44,365 INFO [Listener at localhost/44727] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 00:14:44,365 INFO [Listener at localhost/44727] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41373 connecting to ZooKeeper ensemble=127.0.0.1:57003 2023-07-21 00:14:44,370 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:413730x0, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 00:14:44,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41373-0x101853afe3c000b connected 2023-07-21 00:14:44,372 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 00:14:44,372 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-21 00:14:44,373 DEBUG [Listener at localhost/44727] zookeeper.ZKUtil(164): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 00:14:44,374 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41373 2023-07-21 00:14:44,374 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41373 2023-07-21 00:14:44,375 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41373 2023-07-21 00:14:44,375 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41373 2023-07-21 00:14:44,376 DEBUG [Listener at localhost/44727] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41373 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 00:14:44,378 INFO [Listener at localhost/44727] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 00:14:44,379 INFO [Listener at 
localhost/44727] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-21 00:14:44,379 INFO [Listener at localhost/44727] http.HttpServer(1146): Jetty bound to port 39969 2023-07-21 00:14:44,379 INFO [Listener at localhost/44727] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 00:14:44,384 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:44,384 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23520836{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,AVAILABLE} 2023-07-21 00:14:44,384 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:44,384 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@44357807{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 00:14:44,498 INFO [Listener at localhost/44727] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 00:14:44,498 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 00:14:44,498 INFO [Listener at localhost/44727] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 00:14:44,499 INFO [Listener at localhost/44727] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 00:14:44,499 INFO [Listener at localhost/44727] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 00:14:44,500 INFO [Listener at localhost/44727] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@387388f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/java.io.tmpdir/jetty-0_0_0_0-39969-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3314101281157107670/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:44,502 INFO [Listener at localhost/44727] server.AbstractConnector(333): Started ServerConnector@7fed29c6{HTTP/1.1, (http/1.1)}{0.0.0.0:39969} 2023-07-21 00:14:44,502 INFO [Listener at localhost/44727] server.Server(415): Started @42313ms 2023-07-21 00:14:44,504 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(951): ClusterId : 5d6825f5-0ec3-416c-9136-e6638abbb2b4 2023-07-21 00:14:44,504 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 00:14:44,506 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 00:14:44,506 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 00:14:44,509 DEBUG [RS:3;jenkins-hbase4:41373] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 00:14:44,511 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ReadOnlyZKClient(139): Connect 0x67aae9f8 to 127.0.0.1:57003 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 00:14:44,516 DEBUG [RS:3;jenkins-hbase4:41373] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@696d0b7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 00:14:44,516 DEBUG [RS:3;jenkins-hbase4:41373] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d528288, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:44,524 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41373 2023-07-21 00:14:44,524 INFO [RS:3;jenkins-hbase4:41373] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 00:14:44,524 INFO [RS:3;jenkins-hbase4:41373] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 00:14:44,524 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 00:14:44,525 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39747,1689898482511 with isa=jenkins-hbase4.apache.org/172.31.14.131:41373, startcode=1689898484358 2023-07-21 00:14:44,525 DEBUG [RS:3;jenkins-hbase4:41373] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 00:14:44,528 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50575, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 00:14:44,528 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39747] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,528 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
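The entries that follow show every live region server receiving a NodeChildrenChanged event on /hbase/rs as the new server's ephemeral znode appears. A minimal raw-ZooKeeper sketch of watching that znode; the quorum string and znode layout are taken from the log, and the sketch only illustrates the watch mechanics, not how HBase itself registers its watchers:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:57003", 90000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        // One-shot watch: fires with NodeChildrenChanged when a server znode is added or removed.
        List<String> servers = zk.getChildren("/hbase/rs",
            (WatchedEvent event) -> System.out.println("event: " + event.getType() + " on " + event.getPath()));
        System.out.println("current region servers: " + servers);
        zk.close();
      }
    }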
2023-07-21 00:14:44,528 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b 2023-07-21 00:14:44,528 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40339 2023-07-21 00:14:44,528 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43831 2023-07-21 00:14:44,534 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:44,534 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:44,534 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:44,534 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:44,534 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,534 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,534 WARN [RS:3;jenkins-hbase4:41373] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 00:14:44,535 INFO [RS:3;jenkins-hbase4:41373] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 00:14:44,535 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41373,1689898484358] 2023-07-21 00:14:44,535 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-21 00:14:44,535 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:44,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:44,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:44,540 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-21 00:14:44,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,541 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,541 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,542 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:44,542 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,542 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:44,542 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ZKUtil(162): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,543 DEBUG [RS:3;jenkins-hbase4:41373] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 00:14:44,543 INFO [RS:3;jenkins-hbase4:41373] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 00:14:44,544 INFO [RS:3;jenkins-hbase4:41373] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 00:14:44,544 INFO [RS:3;jenkins-hbase4:41373] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 00:14:44,544 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:44,545 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 00:14:44,546 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,547 DEBUG [RS:3;jenkins-hbase4:41373] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-21 00:14:44,548 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:44,548 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:44,548 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 00:14:44,559 INFO [RS:3;jenkins-hbase4:41373] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 00:14:44,559 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41373,1689898484358-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-21 00:14:44,569 INFO [RS:3;jenkins-hbase4:41373] regionserver.Replication(203): jenkins-hbase4.apache.org,41373,1689898484358 started 2023-07-21 00:14:44,569 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41373,1689898484358, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41373, sessionid=0x101853afe3c000b 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41373,1689898484358' 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 00:14:44,570 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 00:14:44,571 DEBUG [RS:3;jenkins-hbase4:41373] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:44,571 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41373,1689898484358' 2023-07-21 00:14:44,571 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 00:14:44,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:44,571 DEBUG [RS:3;jenkins-hbase4:41373] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 00:14:44,571 DEBUG [RS:3;jenkins-hbase4:41373] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 00:14:44,571 INFO [RS:3;jenkins-hbase4:41373] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 00:14:44,571 INFO [RS:3;jenkins-hbase4:41373] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 00:14:44,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:44,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:44,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:44,577 DEBUG [hconnection-0x5668b100-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 00:14:44,579 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 00:14:44,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:44,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:44,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:41140 deadline: 1689899684589, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
2023-07-21 00:14:44,589 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:44,590 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:44,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,592 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:44,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:44,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:44,640 INFO [Listener at localhost/44727] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=557 (was 502) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1969462625@qtp-170620087-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 40339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/44727.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2c9452e9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b-prefix:jenkins-hbase4.apache.org,36937,1689898482870 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:39074 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35077-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4939a796-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:39084 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1480894769_17 at /127.0.0.1:54468 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/38819-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36361-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@33cb1f4a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5668b100-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp435755432-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 42167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data2/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1247770871-2234 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36361 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp984214216-2244 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:57003): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 155611607@qtp-1910542669-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39919 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp435755432-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/38819-SendThread(127.0.0.1:63294) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x75cd413a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 46267 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1941755354_17 at /127.0.0.1:38012 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp513610295-2141 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1247770871-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1165135290@qtp-1077890064-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34569 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: jenkins-hbase4:36361Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp435755432-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:36937 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp789326476-2177 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1323622070-2516 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) 
org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 42167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x03095489 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp984214216-2247 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6de73a8d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp513610295-2143 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1480894769_17 at /127.0.0.1:38034 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp789326476-2174 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x03095489-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging 
thread: jenkins-hbase4:39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x03095489-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1247770871-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789326476-2175 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@64f7ec49 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789326476-2173-acceptor-0@34055319-ServerConnector@521a0479{HTTP/1.1, (http/1.1)}{0.0.0.0:40557} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3e783fd5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-272e229-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41373 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x75cd413a-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData-prefix:jenkins-hbase4.apache.org,39747,1689898482511 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp513610295-2144 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:42959 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 46267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp435755432-2203-acceptor-0@326ce93a-ServerConnector@db0735{HTTP/1.1, (http/1.1)}{0.0.0.0:40861} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@45906744 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:40339 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp984214216-2248-acceptor-0@7b545ac7-ServerConnector@346e13ed{HTTP/1.1, (http/1.1)}{0.0.0.0:37413} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789326476-2179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1886293303@qtp-2084139685-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38945 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1480894769_17 at /127.0.0.1:39054 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1480894769_17 at /127.0.0.1:54378 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data5/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@234ac78f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41373Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp984214216-2245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5668b100-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898483330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp513610295-2142-acceptor-0@58b65a93-ServerConnector@44888ac3{HTTP/1.1, (http/1.1)}{0.0.0.0:43831} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:40339 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:42959 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 2 on default port 44727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 44727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(855892632) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data6/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63294@0x6caa8057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost:42959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1822624028@qtp-1910542669-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp513610295-2145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x440581f9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1247770871-2235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:54410 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40339 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1323622070-2515 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:42959 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42167 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36667,1689898476899 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x67aae9f8-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x75cd413a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x0b6bf005-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:42959 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp984214216-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 42167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1247770871-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native 
Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to 
localhost/127.0.0.1:40339 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x633d50a3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp435755432-2202 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:38066 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:40339 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41373-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 44727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-60c0b8c9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x633d50a3-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:40339 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x1b6b8af6-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp513610295-2147 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:36937-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1323622070-2514 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5020d06[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data4/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:42959 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1323622070-2518 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 46267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/44727-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1941755354_17 at /127.0.0.1:54354 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp984214216-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-23af71a0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x7b1356a1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1941755354_17 at /127.0.0.1:39026 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40339 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data1/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35077Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x67aae9f8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1247770871-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp513610295-2148 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 40339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1323622070-2517 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x7b1356a1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:40339 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x440581f9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp435755432-2205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b-prefix:jenkins-hbase4.apache.org,36937,1689898482870.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789326476-2172 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp435755432-2204 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1323622070-2512-acceptor-0@48119b9b-ServerConnector@7fed29c6{HTTP/1.1, (http/1.1)}{0.0.0.0:39969} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-539-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44727.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-534-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:40339 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@26f01e9a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x440581f9-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 44727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 44727 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp984214216-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:40339 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@3ed2516 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@5fc9d753 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1247770871-2232 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898483330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: jenkins-hbase4:36937Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b-prefix:jenkins-hbase4.apache.org,35077,1689898483033 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:54402 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp513610295-2146 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 46267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-6b56683c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1247770871-2233-acceptor-0@6c6cdb65-ServerConnector@77e9a657{HTTP/1.1, (http/1.1)}{0.0.0.0:37945} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@508cccdf java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp789326476-2176 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63294@0x6caa8057-SendThread(127.0.0.1:63294) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:228) org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) Potentially hanging thread: hconnection-0x1b6b8af6-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1941755354_17 at /127.0.0.1:39002 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927387857_17 at /127.0.0.1:39058 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63294@0x6caa8057-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1493833170) connection to localhost/127.0.0.1:40339 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x7b1356a1-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:37980 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b-prefix:jenkins-hbase4.apache.org,36361,1689898482701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@a2e3589 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1289323338@qtp-2084139685-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x0b6bf005-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1867500762@qtp-1077890064-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/44727-SendThread(127.0.0.1:57003) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1323622070-2513 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7867acb9-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39747,1689898482511 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 3 on default port 44727 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1323622070-2511 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x0b6bf005 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44727.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 4 on default port 40339 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x67aae9f8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1513934835.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 465959940@qtp-170620087-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36749 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57003@0x633d50a3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@d519891 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:39747 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-798884888_17 at /127.0.0.1:38080 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp435755432-2206 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57003 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927387857_17 at /127.0.0.1:38050 [Receiving 
block BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789326476-2178 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35077 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2cca143a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-525384191-172.31.14.131-1689898481590:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data3/current/BP-525384191-172.31.14.131-1689898481590 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41373 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp984214216-2246 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/236525163.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1927387857_17 at /127.0.0.1:54388 [Receiving block BP-525384191-172.31.14.131-1689898481590:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35077 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3cadfbe7 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=821 (was 766) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=497 (was 574), ProcessCount=174 (was 174), AvailableMemoryMB=2382 (was 2667) 2023-07-21 00:14:44,643 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-21 00:14:44,660 INFO [Listener at localhost/44727] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=557, OpenFileDescriptor=821, MaxFileDescriptor=60000, SystemLoadAverage=497, ProcessCount=174, AvailableMemoryMB=2382 2023-07-21 00:14:44,660 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-21 00:14:44,660 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-21 00:14:44,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:44,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:44,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:44,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:44,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:44,673 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:44,673 INFO [RS:3;jenkins-hbase4:41373] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41373%2C1689898484358, suffix=, logDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,41373,1689898484358, archiveDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs, maxLogs=32 2023-07-21 00:14:44,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:44,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:44,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:44,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:44,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:44,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:44,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:41140 deadline: 1689899684683, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:44,684 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:44,686 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:44,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:44,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:44,687 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:44,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:44,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:44,692 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK] 2023-07-21 00:14:44,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:44,692 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK] 2023-07-21 00:14:44,692 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK] 2023-07-21 00:14:44,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 00:14:44,695 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:44,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-21 00:14:44,696 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:44,698 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:44,699 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:44,699 INFO [RS:3;jenkins-hbase4:41373] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/WALs/jenkins-hbase4.apache.org,41373,1689898484358/jenkins-hbase4.apache.org%2C41373%2C1689898484358.1689898484673 2023-07-21 00:14:44,699 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:44,699 DEBUG [RS:3;jenkins-hbase4:41373] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46795,DS-c92431fe-33f7-4b16-bb4f-ade70d333d1e,DISK], DatanodeInfoWithStorage[127.0.0.1:44315,DS-249fb10b-85af-46b8-b83d-b18a49e4617b,DISK], DatanodeInfoWithStorage[127.0.0.1:46599,DS-39ad5284-7328-466c-9b07-31be4237cf46,DISK]] 2023-07-21 00:14:44,701 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 00:14:44,702 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:44,703 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 empty. 
2023-07-21 00:14:44,703 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:44,703 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 00:14:44,716 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-21 00:14:44,717 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 50965c7e19a7cce5793c3741bf28c2d3, NAME => 't1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 50965c7e19a7cce5793c3741bf28c2d3, disabling compactions & flushes 2023-07-21 00:14:44,727 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. after waiting 0 ms 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:44,727 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:44,727 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 50965c7e19a7cce5793c3741bf28c2d3: 2023-07-21 00:14:44,729 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 00:14:44,730 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898484730"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898484730"}]},"ts":"1689898484730"} 2023-07-21 00:14:44,731 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-21 00:14:44,732 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 00:14:44,732 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898484732"}]},"ts":"1689898484732"} 2023-07-21 00:14:44,733 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-21 00:14:44,737 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 00:14:44,737 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, ASSIGN}] 2023-07-21 00:14:44,738 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, ASSIGN 2023-07-21 00:14:44,742 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36937,1689898482870; forceNewPlan=false, retain=false 2023-07-21 00:14:44,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:44,885 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-21 00:14:44,893 INFO [jenkins-hbase4:39747] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 00:14:44,893 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=50965c7e19a7cce5793c3741bf28c2d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:44,894 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898484893"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898484893"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898484893"}]},"ts":"1689898484893"} 2023-07-21 00:14:44,899 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 50965c7e19a7cce5793c3741bf28c2d3, server=jenkins-hbase4.apache.org,36937,1689898482870}] 2023-07-21 00:14:44,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:45,055 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:45,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 50965c7e19a7cce5793c3741bf28c2d3, NAME => 't1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.', STARTKEY => '', ENDKEY => ''} 2023-07-21 00:14:45,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 00:14:45,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,057 INFO [StoreOpener-50965c7e19a7cce5793c3741bf28c2d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,058 DEBUG [StoreOpener-50965c7e19a7cce5793c3741bf28c2d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/cf1 2023-07-21 00:14:45,058 DEBUG [StoreOpener-50965c7e19a7cce5793c3741bf28c2d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/cf1 2023-07-21 00:14:45,058 INFO [StoreOpener-50965c7e19a7cce5793c3741bf28c2d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 50965c7e19a7cce5793c3741bf28c2d3 columnFamilyName cf1 2023-07-21 00:14:45,059 INFO [StoreOpener-50965c7e19a7cce5793c3741bf28c2d3-1] regionserver.HStore(310): Store=50965c7e19a7cce5793c3741bf28c2d3/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 00:14:45,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 00:14:45,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 50965c7e19a7cce5793c3741bf28c2d3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10349678240, jitterRate=-0.036111101508140564}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 00:14:45,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 50965c7e19a7cce5793c3741bf28c2d3: 2023-07-21 00:14:45,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3., pid=14, masterSystemTime=1689898485051 2023-07-21 00:14:45,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:45,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 
2023-07-21 00:14:45,068 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=50965c7e19a7cce5793c3741bf28c2d3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:45,068 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898485068"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689898485068"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689898485068"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689898485068"}]},"ts":"1689898485068"} 2023-07-21 00:14:45,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-21 00:14:45,071 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 50965c7e19a7cce5793c3741bf28c2d3, server=jenkins-hbase4.apache.org,36937,1689898482870 in 171 msec 2023-07-21 00:14:45,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-21 00:14:45,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, ASSIGN in 334 msec 2023-07-21 00:14:45,074 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 00:14:45,075 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898485074"}]},"ts":"1689898485074"} 2023-07-21 00:14:45,076 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-21 00:14:45,078 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 00:14:45,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 385 msec 2023-07-21 00:14:45,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-21 00:14:45,300 INFO [Listener at localhost/44727] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-21 00:14:45,300 DEBUG [Listener at localhost/44727] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-21 00:14:45,300 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,302 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-21 00:14:45,303 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,303 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-21 00:14:45,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 00:14:45,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-21 00:14:45,307 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 00:14:45,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-21 00:14:45,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:41140 deadline: 1689898545304, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-21 00:14:45,309 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-21 00:14:45,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:45,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:45,411 INFO [Listener at localhost/44727] client.HBaseAdmin$15(890): Started disable of t1 2023-07-21 00:14:45,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-21 00:14:45,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-21 00:14:45,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:45,419 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898485419"}]},"ts":"1689898485419"} 2023-07-21 00:14:45,420 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-21 00:14:45,421 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-21 00:14:45,422 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, UNASSIGN}] 2023-07-21 00:14:45,423 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, UNASSIGN 2023-07-21 00:14:45,424 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=50965c7e19a7cce5793c3741bf28c2d3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:45,424 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898485424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689898485424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689898485424"}]},"ts":"1689898485424"} 2023-07-21 00:14:45,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 50965c7e19a7cce5793c3741bf28c2d3, server=jenkins-hbase4.apache.org,36937,1689898482870}] 2023-07-21 00:14:45,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:45,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 50965c7e19a7cce5793c3741bf28c2d3, disabling compactions & flushes 2023-07-21 00:14:45,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:45,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:45,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. after waiting 0 ms 2023-07-21 00:14:45,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 
2023-07-21 00:14:45,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-21 00:14:45,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3. 2023-07-21 00:14:45,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 50965c7e19a7cce5793c3741bf28c2d3: 2023-07-21 00:14:45,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,586 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=50965c7e19a7cce5793c3741bf28c2d3, regionState=CLOSED 2023-07-21 00:14:45,586 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689898485586"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689898485586"}]},"ts":"1689898485586"} 2023-07-21 00:14:45,589 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-21 00:14:45,589 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 50965c7e19a7cce5793c3741bf28c2d3, server=jenkins-hbase4.apache.org,36937,1689898482870 in 162 msec 2023-07-21 00:14:45,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-21 00:14:45,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=50965c7e19a7cce5793c3741bf28c2d3, UNASSIGN in 167 msec 2023-07-21 00:14:45,590 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689898485590"}]},"ts":"1689898485590"} 2023-07-21 00:14:45,591 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-21 00:14:45,593 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-21 00:14:45,594 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 181 msec 2023-07-21 00:14:45,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-21 00:14:45,718 INFO [Listener at localhost/44727] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-21 00:14:45,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-21 00:14:45,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-21 00:14:45,721 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 00:14:45,722 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-21 00:14:45,722 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-21 00:14:45,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:45,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:45,726 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 00:14:45,728 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/cf1, FileablePath, hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/recovered.edits] 2023-07-21 00:14:45,733 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/recovered.edits/4.seqid to hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/archive/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3/recovered.edits/4.seqid 2023-07-21 00:14:45,733 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/.tmp/data/default/t1/50965c7e19a7cce5793c3741bf28c2d3 2023-07-21 00:14:45,733 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-21 00:14:45,736 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-21 00:14:45,738 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-21 00:14:45,740 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-21 00:14:45,741 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-21 00:14:45,741 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-21 00:14:45,741 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689898485741"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:45,743 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-21 00:14:45,743 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 50965c7e19a7cce5793c3741bf28c2d3, NAME => 't1,,1689898484692.50965c7e19a7cce5793c3741bf28c2d3.', STARTKEY => '', ENDKEY => ''}] 2023-07-21 00:14:45,743 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-21 00:14:45,743 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689898485743"}]},"ts":"9223372036854775807"} 2023-07-21 00:14:45,744 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-21 00:14:45,746 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-21 00:14:45,746 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 27 msec 2023-07-21 00:14:45,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-21 00:14:45,828 INFO [Listener at localhost/44727] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-21 00:14:45,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:45,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:45,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:45,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:45,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:45,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:45,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:45,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:45,847 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:45,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:45,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:45,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:45,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:45,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:45,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:45,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:41140 deadline: 1689899685856, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:45,857 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:45,860 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,861 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:45,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:45,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:45,880 INFO [Listener at localhost/44727] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 557) - Thread LEAK? -, OpenFileDescriptor=833 (was 821) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=497 (was 497), ProcessCount=174 (was 174), AvailableMemoryMB=2405 (was 2382) - AvailableMemoryMB LEAK? 
- 2023-07-21 00:14:45,880 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-21 00:14:45,900 INFO [Listener at localhost/44727] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=497, ProcessCount=174, AvailableMemoryMB=2409 2023-07-21 00:14:45,901 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-21 00:14:45,901 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-21 00:14:45,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:45,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:45,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:45,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:45,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:45,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:45,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:45,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:45,915 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:45,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:45,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:45,920 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:45,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:45,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:45,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:45,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899685927, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:45,927 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:45,929 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,930 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:45,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:45,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:45,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 00:14:45,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:45,933 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-21 00:14:45,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-21 00:14:45,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-21 00:14:45,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:45,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:45,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:45,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:45,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:45,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:45,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:45,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:45,952 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:45,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:45,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:45,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:45,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:45,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:45,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:45,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:45,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899685963, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:45,963 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:45,965 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:45,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:45,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:45,966 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:45,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:45,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:45,987 INFO [Listener at localhost/44727] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? -, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=497 (was 497), ProcessCount=174 (was 174), AvailableMemoryMB=2411 (was 2409) - AvailableMemoryMB LEAK? 
- 2023-07-21 00:14:45,987 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 00:14:46,006 INFO [Listener at localhost/44727] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=497, ProcessCount=174, AvailableMemoryMB=2410 2023-07-21 00:14:46,006 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-21 00:14:46,006 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-21 00:14:46,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:46,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:46,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:46,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:46,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:46,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:46,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:46,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,021 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:46,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:46,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,025 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:46,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:46,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899686030, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:46,031 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:46,033 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:46,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,034 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:46,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:46,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:46,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:46,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:46,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:46,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:46,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:46,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:46,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:46,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,052 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:46,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:46,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:46,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:46,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899686061, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:46,062 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:46,064 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:46,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,064 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:46,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:46,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:46,084 INFO [Listener at localhost/44727] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? -, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=497 (was 497), ProcessCount=174 (was 174), AvailableMemoryMB=2426 (was 2410) - AvailableMemoryMB LEAK? 
- 2023-07-21 00:14:46,084 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 00:14:46,103 INFO [Listener at localhost/44727] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=497, ProcessCount=174, AvailableMemoryMB=2424 2023-07-21 00:14:46,103 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 00:14:46,103 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-21 00:14:46,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:46,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-21 00:14:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:46,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:46,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:46,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:46,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:46,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,116 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:46,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:46,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,120 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:46,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:46,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899686125, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:46,126 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-21 00:14:46,127 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:46,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,128 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:46,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:46,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:46,129 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-21 00:14:46,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-21 00:14:46,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 00:14:46,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-21 00:14:46,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 00:14:46,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,142 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 00:14:46,146 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:46,153 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-21 00:14:46,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-21 00:14:46,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 00:14:46,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:41140 deadline: 1689899686244, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-21 00:14:46,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-21 00:14:46,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 00:14:46,268 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 00:14:46,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 16 msec 2023-07-21 00:14:46,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-21 00:14:46,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-21 00:14:46,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 00:14:46,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-21 00:14:46,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-21 00:14:46,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-21 00:14:46,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,383 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,386 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 00:14:46,388 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,389 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-21 00:14:46,389 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 00:14:46,389 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,391 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-21 00:14:46,392 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-21 00:14:46,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-21 00:14:46,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-21 00:14:46,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-21 00:14:46,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-21 00:14:46,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:41140 deadline: 1689898546498, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-21 00:14:46,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:46,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:46,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:46,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:46,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:46,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-21 00:14:46,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-21 00:14:46,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-21 00:14:46,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-21 00:14:46,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-21 00:14:46,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-21 00:14:46,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-21 00:14:46,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-21 00:14:46,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-21 00:14:46,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-21 00:14:46,519 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-21 00:14:46,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-21 00:14:46,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-21 00:14:46,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-21 00:14:46,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-21 00:14:46,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-21 00:14:46,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39747] to rsgroup master 2023-07-21 00:14:46,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 00:14:46,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41140 deadline: 1689899686529, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 2023-07-21 00:14:46,529 WARN [Listener at localhost/44727] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:39747 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-21 00:14:46,531 INFO [Listener at localhost/44727] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-21 00:14:46,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-21 00:14:46,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-21 00:14:46,532 INFO [Listener at localhost/44727] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35077, jenkins-hbase4.apache.org:36361, jenkins-hbase4.apache.org:36937, jenkins-hbase4.apache.org:41373], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-21 00:14:46,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-21 00:14:46,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39747] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-21 00:14:46,553 INFO [Listener at localhost/44727] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572 (was 572), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=497 (was 497), ProcessCount=174 (was 174), AvailableMemoryMB=3307 (was 2424) - AvailableMemoryMB LEAK? 
- 2023-07-21 00:14:46,553 WARN [Listener at localhost/44727] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-21 00:14:46,553 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 00:14:46,553 INFO [Listener at localhost/44727] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 00:14:46,553 DEBUG [Listener at localhost/44727] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x03095489 to 127.0.0.1:57003 2023-07-21 00:14:46,553 DEBUG [Listener at localhost/44727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,553 DEBUG [Listener at localhost/44727] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 00:14:46,553 DEBUG [Listener at localhost/44727] util.JVMClusterUtil(257): Found active master hash=826843123, stopped=false 2023-07-21 00:14:46,554 DEBUG [Listener at localhost/44727] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-21 00:14:46,554 DEBUG [Listener at localhost/44727] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-21 00:14:46,554 INFO [Listener at localhost/44727] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:46,555 INFO [Listener at localhost/44727] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:46,555 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 00:14:46,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:46,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:46,556 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:46,556 DEBUG [Listener at localhost/44727] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75cd413a to 127.0.0.1:57003 2023-07-21 00:14:46,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:46,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 00:14:46,556 DEBUG [Listener at localhost/44727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,557 INFO [Listener at localhost/44727] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36361,1689898482701' ***** 2023-07-21 00:14:46,557 INFO [Listener at localhost/44727] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:46,557 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:46,559 INFO [Listener at localhost/44727] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36937,1689898482870' ***** 2023-07-21 00:14:46,560 INFO [Listener at localhost/44727] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:46,560 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:46,560 INFO [Listener at localhost/44727] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35077,1689898483033' ***** 2023-07-21 00:14:46,560 INFO [Listener at localhost/44727] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:46,561 INFO [Listener at localhost/44727] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41373,1689898484358' ***** 2023-07-21 00:14:46,561 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:46,561 INFO [Listener at localhost/44727] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 00:14:46,562 INFO [RS:0;jenkins-hbase4:36361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@8d4a78e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:46,562 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:46,565 INFO [RS:0;jenkins-hbase4:36361] server.AbstractConnector(383): Stopped ServerConnector@521a0479{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,565 INFO [RS:1;jenkins-hbase4:36937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5dd8db29{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:46,565 INFO [RS:0;jenkins-hbase4:36361] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:46,567 INFO [RS:0;jenkins-hbase4:36361] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4cf17b35{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:46,567 INFO [RS:1;jenkins-hbase4:36937] server.AbstractConnector(383): Stopped ServerConnector@db0735{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,567 INFO [RS:2;jenkins-hbase4:35077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a25343d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:46,568 INFO [RS:1;jenkins-hbase4:36937] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:46,568 INFO [RS:0;jenkins-hbase4:36361] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e84d38b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:46,568 INFO [RS:3;jenkins-hbase4:41373] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@387388f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 00:14:46,569 INFO [RS:2;jenkins-hbase4:35077] server.AbstractConnector(383): Stopped ServerConnector@77e9a657{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,569 INFO [RS:2;jenkins-hbase4:35077] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:46,568 INFO [RS:1;jenkins-hbase4:36937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a39ee87{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:46,570 INFO [RS:0;jenkins-hbase4:36361] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:46,570 INFO [RS:2;jenkins-hbase4:35077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@59d396e9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:46,571 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:46,570 INFO [RS:3;jenkins-hbase4:41373] server.AbstractConnector(383): Stopped ServerConnector@7fed29c6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,572 INFO [RS:2;jenkins-hbase4:35077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3bddfc7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:46,571 INFO [RS:0;jenkins-hbase4:36361] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 00:14:46,571 INFO [RS:1;jenkins-hbase4:36937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5b843864{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:46,572 INFO [RS:0;jenkins-hbase4:36361] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:46,572 INFO [RS:3;jenkins-hbase4:41373] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:46,572 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(3305): Received CLOSE for 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:46,572 INFO [RS:2;jenkins-hbase4:35077] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:46,573 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:46,573 DEBUG [RS:0;jenkins-hbase4:36361] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b1356a1 to 127.0.0.1:57003 2023-07-21 00:14:46,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 02a141934bb11deb09903fdd06e94126, disabling compactions & flushes 2023-07-21 00:14:46,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:46,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:46,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. after waiting 0 ms 2023-07-21 00:14:46,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:46,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 02a141934bb11deb09903fdd06e94126 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-21 00:14:46,573 DEBUG [RS:0;jenkins-hbase4:36361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,573 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 00:14:46,573 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1478): Online Regions={02a141934bb11deb09903fdd06e94126=hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126.} 2023-07-21 00:14:46,574 INFO [RS:1;jenkins-hbase4:36937] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:46,574 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,574 INFO [RS:1;jenkins-hbase4:36937] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:46,573 INFO [RS:2;jenkins-hbase4:35077] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-21 00:14:46,575 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:46,575 INFO [RS:2;jenkins-hbase4:35077] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:46,575 INFO [RS:1;jenkins-hbase4:36937] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:46,574 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:46,574 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,575 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(3305): Received CLOSE for e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:46,575 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:46,575 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:46,575 DEBUG [RS:2;jenkins-hbase4:35077] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b6bf005 to 127.0.0.1:57003 2023-07-21 00:14:46,575 INFO [RS:3;jenkins-hbase4:41373] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@44357807{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:46,575 DEBUG [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1504): Waiting on 02a141934bb11deb09903fdd06e94126 2023-07-21 00:14:46,575 DEBUG [RS:2;jenkins-hbase4:35077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e3201ae0898cc64d046937666d6d312f, disabling compactions & flushes 2023-07-21 00:14:46,575 DEBUG [RS:1;jenkins-hbase4:36937] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x633d50a3 to 127.0.0.1:57003 2023-07-21 00:14:46,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:46,576 DEBUG [RS:1;jenkins-hbase4:36937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,576 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35077,1689898483033; all regions closed. 2023-07-21 00:14:46,576 INFO [RS:3;jenkins-hbase4:41373] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23520836{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:46,576 INFO [RS:1;jenkins-hbase4:36937] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:46,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:46,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 
after waiting 0 ms 2023-07-21 00:14:46,577 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:46,576 INFO [RS:1;jenkins-hbase4:36937] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:46,577 INFO [RS:1;jenkins-hbase4:36937] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:46,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e3201ae0898cc64d046937666d6d312f 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-21 00:14:46,577 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 00:14:46,577 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,577 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-21 00:14:46,577 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, e3201ae0898cc64d046937666d6d312f=hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f.} 2023-07-21 00:14:46,577 DEBUG [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1504): Waiting on 1588230740, e3201ae0898cc64d046937666d6d312f 2023-07-21 00:14:46,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 00:14:46,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 00:14:46,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 00:14:46,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 00:14:46,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 00:14:46,578 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-21 00:14:46,578 INFO [RS:3;jenkins-hbase4:41373] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 00:14:46,578 INFO [RS:3;jenkins-hbase4:41373] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 00:14:46,578 INFO [RS:3;jenkins-hbase4:41373] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 00:14:46,578 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:46,578 DEBUG [RS:3;jenkins-hbase4:41373] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67aae9f8 to 127.0.0.1:57003 2023-07-21 00:14:46,578 DEBUG [RS:3;jenkins-hbase4:41373] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,578 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41373,1689898484358; all regions closed. 
2023-07-21 00:14:46,578 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 00:14:46,584 DEBUG [RS:2;jenkins-hbase4:35077] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs 2023-07-21 00:14:46,584 INFO [RS:2;jenkins-hbase4:35077] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35077%2C1689898483033:(num 1689898483605) 2023-07-21 00:14:46,584 DEBUG [RS:2;jenkins-hbase4:35077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,584 INFO [RS:2;jenkins-hbase4:35077] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,584 INFO [RS:2;jenkins-hbase4:35077] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:46,585 INFO [RS:2;jenkins-hbase4:35077] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:46,585 INFO [RS:2;jenkins-hbase4:35077] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:46,585 INFO [RS:2;jenkins-hbase4:35077] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:46,585 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:46,586 INFO [RS:2;jenkins-hbase4:35077] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35077 2023-07-21 00:14:46,596 DEBUG [RS:3;jenkins-hbase4:41373] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs 2023-07-21 00:14:46,596 INFO [RS:3;jenkins-hbase4:41373] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41373%2C1689898484358:(num 1689898484673) 2023-07-21 00:14:46,596 DEBUG [RS:3;jenkins-hbase4:41373] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,596 INFO [RS:3;jenkins-hbase4:41373] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,599 INFO [RS:3;jenkins-hbase4:41373] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:46,599 INFO [RS:3;jenkins-hbase4:41373] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:46,599 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:46,599 INFO [RS:3;jenkins-hbase4:41373] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:46,599 INFO [RS:3;jenkins-hbase4:41373] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-21 00:14:46,600 INFO [RS:3;jenkins-hbase4:41373] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41373 2023-07-21 00:14:46,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/.tmp/info/eb769d7a3f0d4392bbd081fbe4947f10 2023-07-21 00:14:46,615 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/info/1ea35b85be1d4daba6124c17548380a0 2023-07-21 00:14:46,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb769d7a3f0d4392bbd081fbe4947f10 2023-07-21 00:14:46,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/.tmp/info/eb769d7a3f0d4392bbd081fbe4947f10 as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/info/eb769d7a3f0d4392bbd081fbe4947f10 2023-07-21 00:14:46,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/.tmp/m/70edc664e2c5453282f734c39a081543 2023-07-21 00:14:46,623 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ea35b85be1d4daba6124c17548380a0 2023-07-21 00:14:46,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb769d7a3f0d4392bbd081fbe4947f10 2023-07-21 00:14:46,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70edc664e2c5453282f734c39a081543 2023-07-21 00:14:46,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/info/eb769d7a3f0d4392bbd081fbe4947f10, entries=3, sequenceid=9, filesize=5.0 K 2023-07-21 00:14:46,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/.tmp/m/70edc664e2c5453282f734c39a081543 as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/m/70edc664e2c5453282f734c39a081543 2023-07-21 00:14:46,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 02a141934bb11deb09903fdd06e94126 in 
54ms, sequenceid=9, compaction requested=false 2023-07-21 00:14:46,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/namespace/02a141934bb11deb09903fdd06e94126/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-21 00:14:46,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70edc664e2c5453282f734c39a081543 2023-07-21 00:14:46,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/m/70edc664e2c5453282f734c39a081543, entries=12, sequenceid=29, filesize=5.4 K 2023-07-21 00:14:46,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e3201ae0898cc64d046937666d6d312f in 60ms, sequenceid=29, compaction requested=false 2023-07-21 00:14:46,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:46,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 02a141934bb11deb09903fdd06e94126: 2023-07-21 00:14:46,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689898483779.02a141934bb11deb09903fdd06e94126. 2023-07-21 00:14:46,650 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,652 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/rep_barrier/4f6b3dd7c5cd4037941f669745bd45a6 2023-07-21 00:14:46,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/rsgroup/e3201ae0898cc64d046937666d6d312f/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-21 00:14:46,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:46,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 2023-07-21 00:14:46,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e3201ae0898cc64d046937666d6d312f: 2023-07-21 00:14:46,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689898483932.e3201ae0898cc64d046937666d6d312f. 
2023-07-21 00:14:46,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f6b3dd7c5cd4037941f669745bd45a6 2023-07-21 00:14:46,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/table/852d4c8a3b9741f6b289d667dff51c3c 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): 
regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35077,1689898483033 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,684 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41373,1689898484358 2023-07-21 00:14:46,684 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35077,1689898483033] 2023-07-21 00:14:46,685 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35077,1689898483033; numProcessing=1 2023-07-21 00:14:46,686 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35077,1689898483033 already deleted, retry=false 2023-07-21 00:14:46,686 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35077,1689898483033 expired; onlineServers=3 2023-07-21 00:14:46,686 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41373,1689898484358] 2023-07-21 00:14:46,686 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41373,1689898484358; numProcessing=2 2023-07-21 00:14:46,687 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41373,1689898484358 already deleted, retry=false 2023-07-21 00:14:46,687 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41373,1689898484358 expired; onlineServers=2 2023-07-21 00:14:46,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 852d4c8a3b9741f6b289d667dff51c3c 2023-07-21 00:14:46,691 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/info/1ea35b85be1d4daba6124c17548380a0 as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/info/1ea35b85be1d4daba6124c17548380a0 2023-07-21 00:14:46,697 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ea35b85be1d4daba6124c17548380a0 2023-07-21 00:14:46,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/info/1ea35b85be1d4daba6124c17548380a0, entries=22, sequenceid=26, filesize=7.3 K 2023-07-21 00:14:46,704 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/rep_barrier/4f6b3dd7c5cd4037941f669745bd45a6 as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/rep_barrier/4f6b3dd7c5cd4037941f669745bd45a6 2023-07-21 00:14:46,710 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f6b3dd7c5cd4037941f669745bd45a6 2023-07-21 00:14:46,710 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/rep_barrier/4f6b3dd7c5cd4037941f669745bd45a6, entries=1, sequenceid=26, filesize=4.9 K 2023-07-21 00:14:46,711 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/.tmp/table/852d4c8a3b9741f6b289d667dff51c3c as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/table/852d4c8a3b9741f6b289d667dff51c3c 2023-07-21 00:14:46,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 852d4c8a3b9741f6b289d667dff51c3c 2023-07-21 00:14:46,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/table/852d4c8a3b9741f6b289d667dff51c3c, entries=6, sequenceid=26, filesize=5.1 K 2023-07-21 00:14:46,718 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 141ms, sequenceid=26, compaction requested=false 2023-07-21 00:14:46,732 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-21 00:14:46,733 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 00:14:46,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:46,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 00:14:46,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 00:14:46,776 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36361,1689898482701; all regions closed. 2023-07-21 00:14:46,777 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36937,1689898482870; all regions closed. 
2023-07-21 00:14:46,782 DEBUG [RS:0;jenkins-hbase4:36361] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs 2023-07-21 00:14:46,782 INFO [RS:0;jenkins-hbase4:36361] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36361%2C1689898482701:(num 1689898483600) 2023-07-21 00:14:46,782 DEBUG [RS:0;jenkins-hbase4:36361] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,782 INFO [RS:0;jenkins-hbase4:36361] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,782 INFO [RS:0;jenkins-hbase4:36361] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:46,783 INFO [RS:0;jenkins-hbase4:36361] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 00:14:46,783 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:46,783 INFO [RS:0;jenkins-hbase4:36361] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 00:14:46,783 INFO [RS:0;jenkins-hbase4:36361] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 00:14:46,784 INFO [RS:0;jenkins-hbase4:36361] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36361 2023-07-21 00:14:46,785 DEBUG [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs 2023-07-21 00:14:46,785 INFO [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36937%2C1689898482870.meta:.meta(num 1689898483695) 2023-07-21 00:14:46,786 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,786 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:46,786 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36361,1689898482701 2023-07-21 00:14:46,787 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36361,1689898482701] 2023-07-21 00:14:46,788 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36361,1689898482701; numProcessing=3 2023-07-21 00:14:46,789 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36361,1689898482701 already deleted, retry=false 2023-07-21 00:14:46,789 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36361,1689898482701 expired; onlineServers=1 2023-07-21 00:14:46,790 DEBUG [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/oldWALs 2023-07-21 00:14:46,790 INFO [RS:1;jenkins-hbase4:36937] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36937%2C1689898482870:(num 1689898483609) 2023-07-21 00:14:46,790 DEBUG [RS:1;jenkins-hbase4:36937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,790 INFO [RS:1;jenkins-hbase4:36937] regionserver.LeaseManager(133): Closed leases 2023-07-21 00:14:46,790 INFO [RS:1;jenkins-hbase4:36937] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 00:14:46,790 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:46,791 INFO [RS:1;jenkins-hbase4:36937] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36937 2023-07-21 00:14:46,793 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36937,1689898482870 2023-07-21 00:14:46,793 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 00:14:46,800 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36937,1689898482870] 2023-07-21 00:14:46,800 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36937,1689898482870; numProcessing=4 2023-07-21 00:14:46,801 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36937,1689898482870 already deleted, retry=false 2023-07-21 00:14:46,801 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36937,1689898482870 expired; onlineServers=0 2023-07-21 00:14:46,801 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39747,1689898482511' ***** 2023-07-21 00:14:46,801 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 00:14:46,802 DEBUG [M:0;jenkins-hbase4:39747] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35771d02, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-21 00:14:46,802 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 00:14:46,804 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 00:14:46,804 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-21 00:14:46,804 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 00:14:46,804 INFO [M:0;jenkins-hbase4:39747] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5d66f18d{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 00:14:46,805 INFO [M:0;jenkins-hbase4:39747] server.AbstractConnector(383): Stopped ServerConnector@44888ac3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,805 INFO [M:0;jenkins-hbase4:39747] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 00:14:46,805 INFO [M:0;jenkins-hbase4:39747] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@746ec252{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 00:14:46,806 INFO [M:0;jenkins-hbase4:39747] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6483b7a2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/hadoop.log.dir/,STOPPED} 2023-07-21 00:14:46,806 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39747,1689898482511 2023-07-21 00:14:46,806 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39747,1689898482511; all regions closed. 2023-07-21 00:14:46,806 DEBUG [M:0;jenkins-hbase4:39747] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 00:14:46,806 INFO [M:0;jenkins-hbase4:39747] master.HMaster(1491): Stopping master jetty server 2023-07-21 00:14:46,807 INFO [M:0;jenkins-hbase4:39747] server.AbstractConnector(383): Stopped ServerConnector@346e13ed{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 00:14:46,808 DEBUG [M:0;jenkins-hbase4:39747] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 00:14:46,808 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-21 00:14:46,808 DEBUG [M:0;jenkins-hbase4:39747] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 00:14:46,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898483330] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689898483330,5,FailOnTimeoutGroup] 2023-07-21 00:14:46,808 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898483330] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689898483330,5,FailOnTimeoutGroup] 2023-07-21 00:14:46,808 INFO [M:0;jenkins-hbase4:39747] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 00:14:46,808 INFO [M:0;jenkins-hbase4:39747] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-21 00:14:46,808 INFO [M:0;jenkins-hbase4:39747] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-21 00:14:46,808 DEBUG [M:0;jenkins-hbase4:39747] master.HMaster(1512): Stopping service threads 2023-07-21 00:14:46,808 INFO [M:0;jenkins-hbase4:39747] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 00:14:46,808 ERROR [M:0;jenkins-hbase4:39747] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-21 00:14:46,808 INFO [M:0;jenkins-hbase4:39747] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 00:14:46,808 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 00:14:46,809 DEBUG [M:0;jenkins-hbase4:39747] zookeeper.ZKUtil(398): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 00:14:46,809 WARN [M:0;jenkins-hbase4:39747] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 00:14:46,809 INFO [M:0;jenkins-hbase4:39747] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 00:14:46,809 INFO [M:0;jenkins-hbase4:39747] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 00:14:46,809 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 00:14:46,809 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:46,809 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:46,809 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 00:14:46,809 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 00:14:46,809 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.23 KB heapSize=90.66 KB 2023-07-21 00:14:46,821 INFO [M:0;jenkins-hbase4:39747] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.23 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3d0aed3d7cfd46cf96e869ba9111d3f7 2023-07-21 00:14:46,826 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3d0aed3d7cfd46cf96e869ba9111d3f7 as hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3d0aed3d7cfd46cf96e869ba9111d3f7 2023-07-21 00:14:46,830 INFO [M:0;jenkins-hbase4:39747] regionserver.HStore(1080): Added hdfs://localhost:40339/user/jenkins/test-data/c40bc9f1-3747-a266-0aa1-93ecc590a49b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3d0aed3d7cfd46cf96e869ba9111d3f7, entries=22, sequenceid=175, filesize=11.1 K 2023-07-21 00:14:46,831 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegion(2948): Finished flush of dataSize ~76.23 KB/78061, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-21 00:14:46,833 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 00:14:46,833 DEBUG [M:0;jenkins-hbase4:39747] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 00:14:46,839 INFO [M:0;jenkins-hbase4:39747] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 00:14:46,839 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 00:14:46,840 INFO [M:0;jenkins-hbase4:39747] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39747 2023-07-21 00:14:46,841 DEBUG [M:0;jenkins-hbase4:39747] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39747,1689898482511 already deleted, retry=false 2023-07-21 00:14:47,156 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,156 INFO [M:0;jenkins-hbase4:39747] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39747,1689898482511; zookeeper connection closed. 
2023-07-21 00:14:47,156 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): master:39747-0x101853afe3c0000, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,256 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,257 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36937-0x101853afe3c0002, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,257 INFO [RS:1;jenkins-hbase4:36937] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36937,1689898482870; zookeeper connection closed. 2023-07-21 00:14:47,257 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@a34f6f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@a34f6f5 2023-07-21 00:14:47,357 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,357 INFO [RS:0;jenkins-hbase4:36361] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36361,1689898482701; zookeeper connection closed. 2023-07-21 00:14:47,357 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:36361-0x101853afe3c0001, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,357 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@14e7114] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@14e7114 2023-07-21 00:14:47,457 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,457 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:35077-0x101853afe3c0003, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,457 INFO [RS:2;jenkins-hbase4:35077] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35077,1689898483033; zookeeper connection closed. 2023-07-21 00:14:47,458 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@422459b6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@422459b6 2023-07-21 00:14:47,557 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,557 INFO [RS:3;jenkins-hbase4:41373] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41373,1689898484358; zookeeper connection closed. 
2023-07-21 00:14:47,558 DEBUG [Listener at localhost/44727-EventThread] zookeeper.ZKWatcher(600): regionserver:41373-0x101853afe3c000b, quorum=127.0.0.1:57003, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 00:14:47,558 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@387dad3d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@387dad3d 2023-07-21 00:14:47,558 INFO [Listener at localhost/44727] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-21 00:14:47,558 WARN [Listener at localhost/44727] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:47,562 INFO [Listener at localhost/44727] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:47,665 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:47,665 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-525384191-172.31.14.131-1689898481590 (Datanode Uuid 18cc931e-84ea-4e0c-86e4-57340b8c0d1a) service to localhost/127.0.0.1:40339 2023-07-21 00:14:47,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data5/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data6/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,667 WARN [Listener at localhost/44727] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:47,671 INFO [Listener at localhost/44727] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:47,773 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:47,773 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-525384191-172.31.14.131-1689898481590 (Datanode Uuid 06c39073-d971-4162-910a-8bde4402365a) service to localhost/127.0.0.1:40339 2023-07-21 00:14:47,774 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data3/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,774 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data4/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,775 WARN [Listener at localhost/44727] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 00:14:47,778 INFO [Listener at localhost/44727] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:47,880 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 00:14:47,880 WARN [BP-525384191-172.31.14.131-1689898481590 heartbeating to localhost/127.0.0.1:40339] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-525384191-172.31.14.131-1689898481590 (Datanode Uuid 6a459a30-7fc6-436c-b2ca-e7fe04c1f1dc) service to localhost/127.0.0.1:40339 2023-07-21 00:14:47,881 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data1/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,882 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/408c7a81-30d4-0d54-c83d-358f8f2f1198/cluster_5ed5401b-14a3-7f60-6a0c-e81bd32f3c30/dfs/data/data2/current/BP-525384191-172.31.14.131-1689898481590] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 00:14:47,892 INFO [Listener at localhost/44727] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 00:14:48,014 INFO [Listener at localhost/44727] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 00:14:48,067 INFO [Listener at localhost/44727] hbase.HBaseTestingUtility(1293): Minicluster is down