2023-07-24 23:10:13,584 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6 2023-07-24 23:10:13,601 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-24 23:10:13,624 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 23:10:13,624 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8, deleteOnExit=true 2023-07-24 23:10:13,625 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 23:10:13,625 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/test.cache.data in system properties and HBase conf 2023-07-24 23:10:13,626 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 23:10:13,626 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir in system properties and HBase conf 2023-07-24 23:10:13,627 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 23:10:13,627 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 23:10:13,627 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 23:10:13,736 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-24 23:10:14,134 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 23:10:14,138 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:14,138 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:14,139 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 23:10:14,139 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:14,139 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 23:10:14,140 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 23:10:14,140 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:14,140 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:14,141 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 23:10:14,141 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/nfs.dump.dir in system properties and HBase conf 2023-07-24 23:10:14,141 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir in system properties and HBase conf 2023-07-24 23:10:14,141 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:14,142 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 23:10:14,142 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 23:10:14,658 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:14,663 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:14,971 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-24 23:10:15,172 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-24 23:10:15,194 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:15,229 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:15,269 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/Jetty_localhost_38633_hdfs____1ubdx7/webapp 2023-07-24 23:10:15,421 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38633 2023-07-24 23:10:15,432 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:15,433 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:15,845 WARN [Listener at localhost/38733] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:15,910 WARN [Listener at localhost/38733] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:15,931 WARN [Listener at localhost/38733] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:15,938 INFO [Listener at localhost/38733] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:15,944 INFO [Listener at localhost/38733] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/Jetty_localhost_37889_datanode____kjwxbw/webapp 2023-07-24 23:10:16,086 INFO [Listener at localhost/38733] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37889 2023-07-24 23:10:16,642 WARN [Listener at localhost/46381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:16,665 WARN [Listener at localhost/46381] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:16,670 WARN [Listener at localhost/46381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:16,673 INFO [Listener at localhost/46381] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:16,679 INFO [Listener at localhost/46381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/Jetty_localhost_33037_datanode____.ml8t7p/webapp 2023-07-24 23:10:16,821 INFO [Listener at localhost/46381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33037 2023-07-24 23:10:16,846 WARN [Listener at localhost/39739] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:16,980 WARN [Listener at localhost/39739] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:16,988 WARN [Listener at localhost/39739] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:16,990 INFO [Listener at localhost/39739] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:17,004 INFO [Listener at localhost/39739] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/Jetty_localhost_45291_datanode____bpt0r4/webapp 2023-07-24 23:10:17,170 INFO [Listener at localhost/39739] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45291 2023-07-24 23:10:17,203 WARN [Listener at localhost/39785] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:17,382 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x424c2351a0c264a7: Processing first storage report for DS-99f991c9-beb0-41c1-9404-df7150cba31b from datanode 97cfdb7a-ec5a-4873-9369-1379102e7245 2023-07-24 23:10:17,384 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x424c2351a0c264a7: from storage DS-99f991c9-beb0-41c1-9404-df7150cba31b node DatanodeRegistration(127.0.0.1:46677, datanodeUuid=97cfdb7a-ec5a-4873-9369-1379102e7245, infoPort=46149, 
infoSecurePort=0, ipcPort=39785, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,384 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd4af45aa31fbf6b: Processing first storage report for DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa from datanode de6bb3d1-2617-46d5-bec0-6ddb8e268b79 2023-07-24 23:10:17,384 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd4af45aa31fbf6b: from storage DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa node DatanodeRegistration(127.0.0.1:39741, datanodeUuid=de6bb3d1-2617-46d5-bec0-6ddb8e268b79, infoPort=42653, infoSecurePort=0, ipcPort=46381, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4a5baa80584271c3: Processing first storage report for DS-ea853878-8ff0-4830-8e4e-e0b850d87b95 from datanode 033ae720-e48d-4c5d-a692-6b037d8757b2 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4a5baa80584271c3: from storage DS-ea853878-8ff0-4830-8e4e-e0b850d87b95 node DatanodeRegistration(127.0.0.1:46461, datanodeUuid=033ae720-e48d-4c5d-a692-6b037d8757b2, infoPort=46165, infoSecurePort=0, ipcPort=39739, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x424c2351a0c264a7: Processing first storage report for DS-447c5eec-e105-4427-83eb-71ce2acea1d4 from datanode 97cfdb7a-ec5a-4873-9369-1379102e7245 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x424c2351a0c264a7: from storage DS-447c5eec-e105-4427-83eb-71ce2acea1d4 node DatanodeRegistration(127.0.0.1:46677, datanodeUuid=97cfdb7a-ec5a-4873-9369-1379102e7245, infoPort=46149, infoSecurePort=0, ipcPort=39785, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd4af45aa31fbf6b: Processing first storage report for DS-39aae5be-e841-4584-96ec-8e9b191b11ea from datanode de6bb3d1-2617-46d5-bec0-6ddb8e268b79 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd4af45aa31fbf6b: from storage DS-39aae5be-e841-4584-96ec-8e9b191b11ea node DatanodeRegistration(127.0.0.1:39741, datanodeUuid=de6bb3d1-2617-46d5-bec0-6ddb8e268b79, infoPort=42653, infoSecurePort=0, ipcPort=46381, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4a5baa80584271c3: Processing first storage report for DS-ca01b1b2-d1c0-4b34-b051-4bc16380fcd5 from datanode 033ae720-e48d-4c5d-a692-6b037d8757b2 2023-07-24 23:10:17,385 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4a5baa80584271c3: from storage 
DS-ca01b1b2-d1c0-4b34-b051-4bc16380fcd5 node DatanodeRegistration(127.0.0.1:46461, datanodeUuid=033ae720-e48d-4c5d-a692-6b037d8757b2, infoPort=46165, infoSecurePort=0, ipcPort=39739, storageInfo=lv=-57;cid=testClusterID;nsid=127754423;c=1690240214735), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 23:10:17,623 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6 2023-07-24 23:10:17,732 INFO [Listener at localhost/39785] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/zookeeper_0, clientPort=59310, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 23:10:17,749 INFO [Listener at localhost/39785] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59310 2023-07-24 23:10:17,761 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:17,763 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:18,430 INFO [Listener at localhost/39785] util.FSUtils(471): Created version file at hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c with version=8 2023-07-24 23:10:18,430 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/hbase-staging 2023-07-24 23:10:18,438 DEBUG [Listener at localhost/39785] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 23:10:18,438 DEBUG [Listener at localhost/39785] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 23:10:18,438 DEBUG [Listener at localhost/39785] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 23:10:18,438 DEBUG [Listener at localhost/39785] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-24 23:10:18,805 INFO [Listener at localhost/39785] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-24 23:10:19,334 INFO [Listener at localhost/39785] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:19,373 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:19,374 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:19,374 INFO [Listener at localhost/39785] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:19,374 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:19,375 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:19,530 INFO [Listener at localhost/39785] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:19,607 DEBUG [Listener at localhost/39785] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-24 23:10:19,702 INFO [Listener at localhost/39785] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42959 2023-07-24 23:10:19,713 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:19,715 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:19,737 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42959 connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:19,793 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:429590x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:19,796 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42959-0x1019999755d0000 connected 2023-07-24 23:10:19,834 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:19,835 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:19,839 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:19,848 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42959 2023-07-24 23:10:19,848 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42959 2023-07-24 23:10:19,849 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42959 2023-07-24 23:10:19,850 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42959 2023-07-24 23:10:19,850 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42959 2023-07-24 23:10:19,885 INFO [Listener at localhost/39785] log.Log(170): Logging initialized @7189ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-24 23:10:20,027 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:20,028 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:20,028 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:20,031 INFO [Listener at localhost/39785] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 23:10:20,031 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:20,031 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:20,036 INFO [Listener at localhost/39785] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 23:10:20,112 INFO [Listener at localhost/39785] http.HttpServer(1146): Jetty bound to port 46533 2023-07-24 23:10:20,114 INFO [Listener at localhost/39785] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:20,148 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,152 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@48ee05fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:20,153 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,153 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ff95bf2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:20,418 INFO [Listener at localhost/39785] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:20,432 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:20,432 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:20,434 INFO [Listener at localhost/39785] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 23:10:20,441 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,469 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5f9ed0a6{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/jetty-0_0_0_0-46533-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4044726758893951888/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:20,480 INFO [Listener at localhost/39785] server.AbstractConnector(333): Started ServerConnector@7d3cded5{HTTP/1.1, (http/1.1)}{0.0.0.0:46533} 2023-07-24 23:10:20,481 INFO [Listener at localhost/39785] server.Server(415): Started @7784ms 2023-07-24 23:10:20,484 INFO [Listener at localhost/39785] master.HMaster(444): hbase.rootdir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c, hbase.cluster.distributed=false 2023-07-24 23:10:20,581 INFO [Listener at localhost/39785] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:20,582 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,582 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,582 INFO 
[Listener at localhost/39785] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:20,583 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,583 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:20,591 INFO [Listener at localhost/39785] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:20,596 INFO [Listener at localhost/39785] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36981 2023-07-24 23:10:20,599 INFO [Listener at localhost/39785] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:20,622 DEBUG [Listener at localhost/39785] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:20,624 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:20,627 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:20,629 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36981 connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:20,658 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:369810x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:20,660 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:369810x0, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:20,687 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:369810x0, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:20,688 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36981-0x1019999755d0001 connected 2023-07-24 23:10:20,691 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:20,738 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36981 2023-07-24 23:10:20,749 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36981 2023-07-24 23:10:20,768 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36981 2023-07-24 23:10:20,769 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=36981 2023-07-24 23:10:20,770 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36981 2023-07-24 23:10:20,773 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:20,773 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:20,774 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:20,775 INFO [Listener at localhost/39785] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:20,776 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:20,776 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:20,776 INFO [Listener at localhost/39785] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:20,778 INFO [Listener at localhost/39785] http.HttpServer(1146): Jetty bound to port 43055 2023-07-24 23:10:20,778 INFO [Listener at localhost/39785] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:20,783 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,783 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30b14bcd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:20,784 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,784 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e03ec49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:20,945 INFO [Listener at localhost/39785] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:20,947 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:20,947 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:20,947 INFO [Listener at localhost/39785] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 23:10:20,949 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:20,954 INFO [Listener at localhost/39785] 
handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@333a682e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/jetty-0_0_0_0-43055-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7588732286233049037/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:20,956 INFO [Listener at localhost/39785] server.AbstractConnector(333): Started ServerConnector@32ff9d35{HTTP/1.1, (http/1.1)}{0.0.0.0:43055} 2023-07-24 23:10:20,956 INFO [Listener at localhost/39785] server.Server(415): Started @8260ms 2023-07-24 23:10:20,975 INFO [Listener at localhost/39785] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:20,976 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,976 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,976 INFO [Listener at localhost/39785] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:20,977 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:20,977 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:20,977 INFO [Listener at localhost/39785] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:20,980 INFO [Listener at localhost/39785] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42429 2023-07-24 23:10:20,980 INFO [Listener at localhost/39785] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:20,983 DEBUG [Listener at localhost/39785] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:20,984 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:20,985 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:20,987 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42429 connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:20,991 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:424290x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:20,993 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42429-0x1019999755d0002 connected 2023-07-24 23:10:20,993 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:20,994 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:20,995 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:21,001 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42429 2023-07-24 23:10:21,001 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42429 2023-07-24 23:10:21,002 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42429 2023-07-24 23:10:21,002 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42429 2023-07-24 23:10:21,003 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42429 2023-07-24 23:10:21,006 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:21,006 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:21,006 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:21,007 INFO [Listener at localhost/39785] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:21,007 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:21,007 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:21,007 INFO [Listener at localhost/39785] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 23:10:21,008 INFO [Listener at localhost/39785] http.HttpServer(1146): Jetty bound to port 35787 2023-07-24 23:10:21,008 INFO [Listener at localhost/39785] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:21,015 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,015 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@17d42069{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:21,015 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,016 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e9c82{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:21,161 INFO [Listener at localhost/39785] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:21,162 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:21,162 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:21,162 INFO [Listener at localhost/39785] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:21,164 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,165 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@659da4f9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/jetty-0_0_0_0-35787-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1049168494059842255/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:21,168 INFO [Listener at localhost/39785] server.AbstractConnector(333): Started ServerConnector@71b1cadf{HTTP/1.1, (http/1.1)}{0.0.0.0:35787} 2023-07-24 23:10:21,168 INFO [Listener at localhost/39785] server.Server(415): Started @8472ms 2023-07-24 23:10:21,186 INFO [Listener at localhost/39785] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:21,187 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:21,187 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:21,187 INFO [Listener at localhost/39785] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:21,187 INFO 
[Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:21,187 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:21,188 INFO [Listener at localhost/39785] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:21,192 INFO [Listener at localhost/39785] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33649 2023-07-24 23:10:21,193 INFO [Listener at localhost/39785] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:21,200 DEBUG [Listener at localhost/39785] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:21,201 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:21,204 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:21,206 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33649 connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:21,216 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:336490x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:21,223 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33649-0x1019999755d0003 connected 2023-07-24 23:10:21,223 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:21,224 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:21,226 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:21,230 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33649 2023-07-24 23:10:21,233 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33649 2023-07-24 23:10:21,234 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33649 2023-07-24 23:10:21,235 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33649 2023-07-24 23:10:21,235 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33649 2023-07-24 23:10:21,238 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:21,239 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:21,239 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:21,240 INFO [Listener at localhost/39785] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:21,240 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:21,240 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:21,241 INFO [Listener at localhost/39785] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:21,243 INFO [Listener at localhost/39785] http.HttpServer(1146): Jetty bound to port 41735 2023-07-24 23:10:21,243 INFO [Listener at localhost/39785] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:21,245 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,245 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ebb434d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:21,246 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,246 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7756df1d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:21,384 INFO [Listener at localhost/39785] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:21,385 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:21,385 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:21,386 INFO [Listener at localhost/39785] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 23:10:21,387 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:21,387 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1001ad12{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/jetty-0_0_0_0-41735-hbase-server-2_4_18-SNAPSHOT_jar-_-any-575528047414072594/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:21,389 INFO [Listener at localhost/39785] server.AbstractConnector(333): Started ServerConnector@44a90bfb{HTTP/1.1, (http/1.1)}{0.0.0.0:41735} 2023-07-24 23:10:21,389 INFO [Listener at localhost/39785] server.Server(415): Started @8692ms 2023-07-24 23:10:21,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:21,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@30d25553{HTTP/1.1, (http/1.1)}{0.0.0.0:33125} 2023-07-24 23:10:21,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8702ms 2023-07-24 23:10:21,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:21,408 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:21,410 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:21,435 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:21,435 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:21,435 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:21,436 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:21,436 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:21,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:21,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42959,1690240218606 from backup master directory 2023-07-24 23:10:21,441 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:21,444 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:21,444 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:21,445 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:21,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:21,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-24 23:10:21,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-24 23:10:21,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/hbase.id with ID: 84747357-cecf-4454-93dc-a1cdf648adda 2023-07-24 23:10:21,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:21,617 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:21,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5970ea93 to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:21,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5dd9161a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:21,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:21,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 23:10:21,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-24 23:10:21,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-24 23:10:21,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 23:10:21,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 23:10:21,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:21,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store-tmp 2023-07-24 23:10:21,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:21,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:21,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:21,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:21,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:21,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:21,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 23:10:21,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:21,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/WALs/jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:21,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42959%2C1690240218606, suffix=, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/WALs/jenkins-hbase4.apache.org,42959,1690240218606, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/oldWALs, maxLogs=10 2023-07-24 23:10:21,940 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:21,940 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:21,940 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:21,950 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 23:10:22,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/WALs/jenkins-hbase4.apache.org,42959,1690240218606/jenkins-hbase4.apache.org%2C42959%2C1690240218606.1690240221875 2023-07-24 23:10:22,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK]] 2023-07-24 23:10:22,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:22,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:22,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,105 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,112 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 23:10:22,141 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 23:10:22,153 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 23:10:22,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:22,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:22,180 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11711022240, jitterRate=0.09067393839359283}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:22,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:22,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 23:10:22,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 23:10:22,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 23:10:22,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 23:10:22,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 23:10:22,247 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 38 msec 2023-07-24 23:10:22,247 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 23:10:22,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 23:10:22,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 23:10:22,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 23:10:22,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 23:10:22,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 23:10:22,299 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:22,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 23:10:22,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 23:10:22,314 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 23:10:22,318 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:22,318 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:22,318 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:22,318 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:22,319 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:22,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42959,1690240218606, sessionid=0x1019999755d0000, setting cluster-up flag (Was=false) 2023-07-24 23:10:22,337 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:22,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 23:10:22,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:22,350 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:22,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 23:10:22,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:22,359 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.hbase-snapshot/.tmp 2023-07-24 23:10:22,393 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(951): ClusterId : 84747357-cecf-4454-93dc-a1cdf648adda 2023-07-24 23:10:22,393 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(951): ClusterId : 84747357-cecf-4454-93dc-a1cdf648adda 2023-07-24 23:10:22,393 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(951): ClusterId : 84747357-cecf-4454-93dc-a1cdf648adda 2023-07-24 23:10:22,399 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:22,399 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:22,399 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:22,407 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:22,407 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:22,407 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:22,407 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:22,407 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:22,407 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:22,411 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:22,411 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:22,412 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:22,412 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ReadOnlyZKClient(139): Connect 0x7c50fe2d to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-24 23:10:22,413 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ReadOnlyZKClient(139): Connect 0x58697212 to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:22,413 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ReadOnlyZKClient(139): Connect 0x42022bb8 to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:22,422 DEBUG [RS:0;jenkins-hbase4:36981] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d20ccd8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:22,422 DEBUG [RS:1;jenkins-hbase4:42429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fbc8d14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:22,422 DEBUG [RS:0;jenkins-hbase4:36981] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5def039d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:22,422 DEBUG [RS:1;jenkins-hbase4:42429] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20f7b575, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:22,423 DEBUG [RS:2;jenkins-hbase4:33649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4cc5dd88, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:22,423 DEBUG [RS:2;jenkins-hbase4:33649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48fa40c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:22,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 23:10:22,452 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36981 2023-07-24 23:10:22,454 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:33649 2023-07-24 23:10:22,454 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42429 2023-07-24 23:10:22,459 INFO [RS:2;jenkins-hbase4:33649] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:22,459 INFO [RS:0;jenkins-hbase4:36981] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:22,459 INFO [RS:1;jenkins-hbase4:42429] 
regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:22,459 INFO [RS:1;jenkins-hbase4:42429] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:22,459 INFO [RS:0;jenkins-hbase4:36981] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:22,459 INFO [RS:2;jenkins-hbase4:33649] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:22,460 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:22,460 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:22,460 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:22,465 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:36981, startcode=1690240220580 2023-07-24 23:10:22,465 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:33649, startcode=1690240221185 2023-07-24 23:10:22,465 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:42429, startcode=1690240220974 2023-07-24 23:10:22,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 23:10:22,468 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:22,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 23:10:22,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 23:10:22,491 DEBUG [RS:0;jenkins-hbase4:36981] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:22,491 DEBUG [RS:2;jenkins-hbase4:33649] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:22,491 DEBUG [RS:1;jenkins-hbase4:42429] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:22,562 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38607, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:22,562 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40721, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:22,562 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55695, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:22,573 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:22,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:22,583 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:22,584 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:22,609 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 23:10:22,609 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 23:10:22,609 WARN [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 23:10:22,609 WARN [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 23:10:22,609 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 23:10:22,610 WARN [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 23:10:22,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:22,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 23:10:22,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:22,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:22,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690240252650 2023-07-24 23:10:22,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 23:10:22,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 23:10:22,661 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:22,662 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 23:10:22,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 23:10:22,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 23:10:22,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 23:10:22,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 23:10:22,669 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 
'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:22,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 23:10:22,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 23:10:22,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 23:10:22,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 23:10:22,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 23:10:22,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240222683,5,FailOnTimeoutGroup] 2023-07-24 23:10:22,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240222684,5,FailOnTimeoutGroup] 2023-07-24 23:10:22,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 23:10:22,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:22,710 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:36981, startcode=1690240220580 2023-07-24 23:10:22,711 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:42429, startcode=1690240220974 2023-07-24 23:10:22,711 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:33649, startcode=1690240221185 2023-07-24 23:10:22,716 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,717 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:22,718 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 23:10:22,722 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,722 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:22,723 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 23:10:22,724 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c 2023-07-24 23:10:22,724 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,724 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38733 2023-07-24 23:10:22,724 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46533 2023-07-24 23:10:22,724 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 23:10:22,725 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c 2023-07-24 23:10:22,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 23:10:22,725 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38733 2023-07-24 23:10:22,726 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46533 2023-07-24 23:10:22,726 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c 2023-07-24 23:10:22,726 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38733 2023-07-24 23:10:22,726 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46533 2023-07-24 23:10:22,741 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:22,742 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,742 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,742 WARN [RS:0;jenkins-hbase4:36981] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:22,742 INFO [RS:0;jenkins-hbase4:36981] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:22,743 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,743 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,742 WARN [RS:2;jenkins-hbase4:33649] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:22,743 WARN [RS:1;jenkins-hbase4:42429] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 23:10:22,746 INFO [RS:2;jenkins-hbase4:33649] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:22,746 INFO [RS:1;jenkins-hbase4:42429] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:22,747 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,747 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,755 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33649,1690240221185] 2023-07-24 23:10:22,755 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36981,1690240220580] 2023-07-24 23:10:22,755 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42429,1690240220974] 2023-07-24 23:10:22,767 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:22,768 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:22,769 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c 2023-07-24 23:10:22,772 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,773 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 
23:10:22,773 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,780 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,780 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,781 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,781 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,783 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,783 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,794 DEBUG [RS:0;jenkins-hbase4:36981] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:22,794 DEBUG [RS:2;jenkins-hbase4:33649] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:22,794 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:22,808 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:22,814 INFO [RS:1;jenkins-hbase4:42429] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:22,814 INFO [RS:0;jenkins-hbase4:36981] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:22,815 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:22,815 INFO [RS:2;jenkins-hbase4:33649] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:22,818 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info 2023-07-24 23:10:22,819 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:22,822 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:22,822 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:22,826 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:22,827 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:22,828 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:22,828 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:22,830 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table 2023-07-24 23:10:22,831 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:22,832 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:22,834 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:22,840 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:22,849 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 23:10:22,852 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:22,855 INFO [RS:0;jenkins-hbase4:36981] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:22,857 INFO [RS:2;jenkins-hbase4:33649] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:22,858 INFO [RS:1;jenkins-hbase4:42429] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:22,863 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:22,864 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10700275840, jitterRate=-0.003459155559539795}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:22,864 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:22,865 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:22,865 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:22,865 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:22,865 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:22,865 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:22,865 INFO [RS:0;jenkins-hbase4:36981] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:22,865 INFO [RS:2;jenkins-hbase4:33649] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 
23:10:22,865 INFO [RS:1;jenkins-hbase4:42429] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:22,867 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,867 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:22,866 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,867 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:22,867 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,867 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:22,870 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:22,872 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:22,875 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:22,875 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 23:10:22,880 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,880 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,880 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:22,881 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,881 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,881 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:22,882 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:22,882 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,882 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-24 23:10:22,883 DEBUG [RS:2;jenkins-hbase4:33649] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,883 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,883 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,883 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:22,884 DEBUG [RS:1;jenkins-hbase4:42429] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,884 DEBUG [RS:0;jenkins-hbase4:36981] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:22,886 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 23:10:22,887 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,888 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,888 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,888 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,888 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:22,888 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,894 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,895 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,895 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,904 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 23:10:22,911 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 23:10:22,916 INFO [RS:2;jenkins-hbase4:33649] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:22,916 INFO [RS:0;jenkins-hbase4:36981] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:22,917 INFO [RS:1;jenkins-hbase4:42429] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:22,920 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36981,1690240220580-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,920 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42429,1690240220974-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:22,920 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33649,1690240221185-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:22,938 INFO [RS:0;jenkins-hbase4:36981] regionserver.Replication(203): jenkins-hbase4.apache.org,36981,1690240220580 started 2023-07-24 23:10:22,938 INFO [RS:1;jenkins-hbase4:42429] regionserver.Replication(203): jenkins-hbase4.apache.org,42429,1690240220974 started 2023-07-24 23:10:22,938 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36981,1690240220580, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36981, sessionid=0x1019999755d0001 2023-07-24 23:10:22,938 INFO [RS:2;jenkins-hbase4:33649] regionserver.Replication(203): jenkins-hbase4.apache.org,33649,1690240221185 started 2023-07-24 23:10:22,938 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42429,1690240220974, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42429, sessionid=0x1019999755d0002 2023-07-24 23:10:22,938 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33649,1690240221185, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33649, sessionid=0x1019999755d0003 2023-07-24 23:10:22,938 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:22,938 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:22,938 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:22,939 DEBUG [RS:2;jenkins-hbase4:33649] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,939 DEBUG [RS:0;jenkins-hbase4:36981] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,939 DEBUG [RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33649,1690240221185' 2023-07-24 23:10:22,939 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36981,1690240220580' 2023-07-24 23:10:22,939 DEBUG [RS:1;jenkins-hbase4:42429] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,940 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42429,1690240220974' 2023-07-24 23:10:22,940 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:22,940 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:22,940 DEBUG [RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:22,941 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:22,941 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:22,941 DEBUG 
[RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:22,942 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:22,942 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:22,942 DEBUG [RS:1;jenkins-hbase4:42429] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:22,942 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:22,942 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42429,1690240220974' 2023-07-24 23:10:22,942 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:22,942 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:22,942 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:22,942 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:22,942 DEBUG [RS:0;jenkins-hbase4:36981] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:22,943 DEBUG [RS:1;jenkins-hbase4:42429] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:22,943 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36981,1690240220580' 2023-07-24 23:10:22,943 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:22,943 DEBUG [RS:2;jenkins-hbase4:33649] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:22,943 DEBUG [RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33649,1690240221185' 2023-07-24 23:10:22,943 DEBUG [RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:22,943 DEBUG [RS:1;jenkins-hbase4:42429] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:22,944 INFO [RS:1;jenkins-hbase4:42429] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:22,944 INFO [RS:1;jenkins-hbase4:42429] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 23:10:22,944 DEBUG [RS:2;jenkins-hbase4:33649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:22,944 DEBUG [RS:0;jenkins-hbase4:36981] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:22,944 DEBUG [RS:2;jenkins-hbase4:33649] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:22,945 INFO [RS:2;jenkins-hbase4:33649] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:22,945 INFO [RS:2;jenkins-hbase4:33649] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 23:10:22,947 DEBUG [RS:0;jenkins-hbase4:36981] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:22,947 INFO [RS:0;jenkins-hbase4:36981] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:22,947 INFO [RS:0;jenkins-hbase4:36981] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 23:10:23,057 INFO [RS:1;jenkins-hbase4:42429] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42429%2C1690240220974, suffix=, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,42429,1690240220974, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:23,057 INFO [RS:0;jenkins-hbase4:36981] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36981%2C1690240220580, suffix=, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,36981,1690240220580, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:23,063 INFO [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33649%2C1690240221185, suffix=, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,33649,1690240221185, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:23,063 DEBUG [jenkins-hbase4:42959] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 23:10:23,090 DEBUG [jenkins-hbase4:42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:23,092 DEBUG [jenkins-hbase4:42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:23,092 DEBUG [jenkins-hbase4:42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:23,092 DEBUG [jenkins-hbase4:42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:23,092 DEBUG [jenkins-hbase4:42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:23,106 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33649,1690240221185, state=OPENING 
2023-07-24 23:10:23,107 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:23,115 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:23,117 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:23,117 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:23,118 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:23,118 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:23,118 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:23,119 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:23,119 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:23,120 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 23:10:23,127 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:23,132 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:23,136 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:23,141 INFO [RS:0;jenkins-hbase4:36981] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,36981,1690240220580/jenkins-hbase4.apache.org%2C36981%2C1690240220580.1690240223062 2023-07-24 23:10:23,141 DEBUG [RS:0;jenkins-hbase4:36981] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK]] 2023-07-24 23:10:23,142 INFO [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,33649,1690240221185/jenkins-hbase4.apache.org%2C33649%2C1690240221185.1690240223065 2023-07-24 23:10:23,144 DEBUG [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK]] 2023-07-24 23:10:23,148 INFO [RS:1;jenkins-hbase4:42429] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,42429,1690240220974/jenkins-hbase4.apache.org%2C42429%2C1690240220974.1690240223062 2023-07-24 23:10:23,150 DEBUG [RS:1;jenkins-hbase4:42429] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK], DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK]] 2023-07-24 23:10:23,252 WARN [ReadOnlyZKClient-127.0.0.1:59310@0x5970ea93] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 23:10:23,294 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:23,298 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48058, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:23,299 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33649] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48058 deadline: 1690240283299, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:23,366 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:23,372 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:23,377 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48068, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:23,391 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 23:10:23,391 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:23,395 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33649%2C1690240221185.meta, suffix=.meta, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,33649,1690240221185, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:23,426 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:23,426 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:23,426 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:23,443 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,33649,1690240221185/jenkins-hbase4.apache.org%2C33649%2C1690240221185.meta.1690240223396.meta 2023-07-24 23:10:23,444 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK]] 2023-07-24 23:10:23,444 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:23,446 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:23,449 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 23:10:23,450 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 23:10:23,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 23:10:23,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:23,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 23:10:23,457 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 23:10:23,460 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:23,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info 2023-07-24 23:10:23,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info 2023-07-24 23:10:23,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:23,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:23,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:23,465 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:23,465 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:23,466 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:23,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:23,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:23,469 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table 2023-07-24 23:10:23,469 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table 2023-07-24 23:10:23,469 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:23,470 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:23,471 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:23,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:23,479 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 23:10:23,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:23,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11194722720, jitterRate=0.04258979856967926}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:23,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:23,499 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690240223319 2023-07-24 23:10:23,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 23:10:23,530 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 23:10:23,531 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33649,1690240221185, state=OPEN 2023-07-24 23:10:23,534 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:23,534 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:23,542 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 23:10:23,542 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33649,1690240221185 in 398 msec 2023-07-24 23:10:23,550 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 23:10:23,550 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 658 msec 2023-07-24 23:10:23,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0730 sec 2023-07-24 23:10:23,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690240223557, completionTime=-1 2023-07-24 23:10:23,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 23:10:23,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 23:10:23,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 23:10:23,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690240283625 2023-07-24 23:10:23,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690240343625 2023-07-24 23:10:23,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 67 msec 2023-07-24 23:10:23,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42959,1690240218606-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:23,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42959,1690240218606-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:23,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42959,1690240218606-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:23,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42959, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:23,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:23,660 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 23:10:23,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 23:10:23,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:23,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 23:10:23,689 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:23,693 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:23,715 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:23,719 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 empty. 2023-07-24 23:10:23,720 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:23,720 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 23:10:23,781 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:23,783 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c59756cef5ea3b9231917a64964f5e23, NAME => 'hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:23,833 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:23,837 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 23:10:23,844 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:23,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c59756cef5ea3b9231917a64964f5e23, disabling compactions & flushes 2023-07-24 23:10:23,845 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:23,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:23,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. after waiting 0 ms 2023-07-24 23:10:23,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:23,845 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:23,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c59756cef5ea3b9231917a64964f5e23: 2023-07-24 23:10:23,849 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:23,849 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:23,851 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:23,855 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:23,856 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 empty. 
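
The two HMaster(2148) entries above print the table descriptors the master builds internally (via CreateTableProcedure) for hbase:namespace and hbase:rsgroup. As a point of reference only, the following is a hedged client-side sketch of a table with the same 'info' family settings, written against the public 2.x Admin API; the table name and connection are hypothetical and not taken from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Same 'info' family attributes as the hbase:namespace descriptor logged above.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo_namespace_like"))   // hypothetical name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                  .setBloomFilterType(BloomType.ROW)
                  .setInMemory(true)
                  .setMaxVersions(10)
                  .setBlocksize(8192)
                  .setCompressionType(Compression.Algorithm.NONE)
                  .build())
              .build());
        }
      }
    }
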
2023-07-24 23:10:23,863 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:23,864 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 23:10:23,876 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240223853"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240223853"}]},"ts":"1690240223853"} 2023-07-24 23:10:23,920 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:23,923 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 386ba32f0c3b0408cdca5a4ed5ced8e4, NAME => 'hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:23,923 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:23,936 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:23,947 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240223937"}]},"ts":"1690240223937"} 2023-07-24 23:10:23,961 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 23:10:23,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:23,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 386ba32f0c3b0408cdca5a4ed5ced8e4, disabling compactions & flushes 2023-07-24 23:10:23,966 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:23,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:23,966 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 
after waiting 0 ms 2023-07-24 23:10:23,967 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:23,967 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:23,967 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 386ba32f0c3b0408cdca5a4ed5ced8e4: 2023-07-24 23:10:23,968 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:23,969 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:23,969 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:23,969 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:23,969 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:23,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, ASSIGN}] 2023-07-24 23:10:23,972 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:23,973 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240223973"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240223973"}]},"ts":"1690240223973"} 2023-07-24 23:10:23,974 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, ASSIGN 2023-07-24 23:10:23,978 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:23,979 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
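
The hbase:rsgroup descriptor above additionally carries a coprocessor (MultiRowMutationEndpoint) and DisabledRegionSplitPolicy. A minimal sketch of expressing those two attributes with TableDescriptorBuilder, assuming the imports and the open Admin handle from the previous sketch; the table name is hypothetical.

    // setCoprocessor can throw IOException, so the caller declares it.
    static void createRsGroupLikeTable(Admin admin) throws IOException {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo_rsgroup_like"))   // hypothetical name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
          // Same coprocessor the master attaches to hbase:rsgroup in the log above.
          .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
          // Same split policy: keeps the table at a single region.
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .build());
    }
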
2023-07-24 23:10:23,981 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:23,982 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240223981"}]},"ts":"1690240223981"} 2023-07-24 23:10:23,985 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 23:10:23,990 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:23,990 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:23,990 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:23,990 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:23,990 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:23,990 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, ASSIGN}] 2023-07-24 23:10:23,995 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, ASSIGN 2023-07-24 23:10:23,998 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:23,999 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 23:10:24,002 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:24,002 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240224001"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240224001"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240224001"}]},"ts":"1690240224001"} 2023-07-24 23:10:24,002 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:24,002 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240224002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240224002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240224002"}]},"ts":"1690240224002"} 2023-07-24 23:10:24,005 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:24,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:24,159 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:24,160 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:24,164 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54526, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:24,170 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:24,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 386ba32f0c3b0408cdca5a4ed5ced8e4, NAME => 'hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:24,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:24,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. service=MultiRowMutationService 2023-07-24 23:10:24,171 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
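
The RegionStateStore puts and OpenRegionProcedure dispatch above are the master-internal assignment path. Once those procedures finish, the resulting placement can be read from a client with RegionLocator; a small sketch, assuming an open Connection named conn.

    // Assumes `conn` is an open org.apache.hadoop.hbase.client.Connection.
    static void printAssignments(Connection conn) throws IOException {
      try (RegionLocator locator =
               conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
        for (HRegionLocation loc : locator.getAllRegionLocations()) {
          // e.g. 386ba32f0c3b0408cdca5a4ed5ced8e4 -> jenkins-hbase4.apache.org,42429,...
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }
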
2023-07-24 23:10:24,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:24,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,175 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,177 DEBUG [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m 2023-07-24 23:10:24,177 DEBUG [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m 2023-07-24 23:10:24,178 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 386ba32f0c3b0408cdca5a4ed5ced8e4 columnFamilyName m 2023-07-24 23:10:24,179 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] regionserver.HStore(310): Store=386ba32f0c3b0408cdca5a4ed5ced8e4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:24,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:24,192 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:24,192 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 386ba32f0c3b0408cdca5a4ed5ced8e4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1701f9ba, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:24,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 386ba32f0c3b0408cdca5a4ed5ced8e4: 2023-07-24 23:10:24,194 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4., pid=9, masterSystemTime=1690240224159 2023-07-24 23:10:24,199 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:24,199 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:24,200 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:24,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c59756cef5ea3b9231917a64964f5e23, NAME => 'hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:24,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:24,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,200 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,201 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:24,201 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240224200"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240224200"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240224200"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240224200"}]},"ts":"1690240224200"} 2023-07-24 23:10:24,203 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,205 DEBUG [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info 2023-07-24 23:10:24,206 DEBUG [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info 2023-07-24 23:10:24,206 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c59756cef5ea3b9231917a64964f5e23 columnFamilyName info 2023-07-24 23:10:24,207 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] regionserver.HStore(310): Store=c59756cef5ea3b9231917a64964f5e23/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:24,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,216 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 23:10:24,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,42429,1690240220974 in 197 msec 2023-07-24 23:10:24,220 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 23:10:24,221 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, ASSIGN in 226 msec 2023-07-24 23:10:24,221 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:24,223 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:24,223 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240224223"}]},"ts":"1690240224223"} 2023-07-24 23:10:24,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:24,226 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 23:10:24,226 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c59756cef5ea3b9231917a64964f5e23; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10164976800, jitterRate=-0.0533127635717392}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:24,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c59756cef5ea3b9231917a64964f5e23: 2023-07-24 23:10:24,230 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:24,231 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23., pid=8, masterSystemTime=1690240224159 2023-07-24 23:10:24,233 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:24,233 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 
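
At this point both system regions are open (next sequenceid=2) and the CreateTableProcedures are wrapping up. In test code built on HBaseTestingUtility, the usual way to wait for this state, rather than parsing the procedure log, is roughly the following; TEST_UTIL is assumed to be a started HBaseTestingUtility, and the surrounding test method would declare throws Exception.

    // Both calls block until the condition holds or the utility's timeout elapses.
    TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:rsgroup"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
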
2023-07-24 23:10:24,234 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 397 msec 2023-07-24 23:10:24,235 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:24,235 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240224234"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240224234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240224234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240224234"}]},"ts":"1690240224234"} 2023-07-24 23:10:24,243 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-24 23:10:24,243 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,42429,1690240220974 in 233 msec 2023-07-24 23:10:24,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 23:10:24,250 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, ASSIGN in 272 msec 2023-07-24 23:10:24,254 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:24,254 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240224254"}]},"ts":"1690240224254"} 2023-07-24 23:10:24,257 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 23:10:24,265 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:24,268 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:24,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 590 msec 2023-07-24 23:10:24,271 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54528, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:24,274 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 23:10:24,275 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl(534): 
Refreshing in Online mode. 2023-07-24 23:10:24,290 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 23:10:24,292 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:24,292 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:24,317 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 23:10:24,336 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:24,342 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-07-24 23:10:24,351 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 23:10:24,363 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:24,363 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:24,372 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:24,375 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:24,380 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 23:10:24,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 32 msec 2023-07-24 23:10:24,398 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 23:10:24,402 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 
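
The CreateNamespaceProcedure entries for 'default' and 'hbase' are master bootstrap work; user namespaces go through the same path via the Admin API. A brief sketch with a hypothetical namespace name, assuming an open Admin handle as in the earlier sketches.

    // createNamespace can throw IOException; the namespace name is hypothetical.
    admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
    for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
      System.out.println(ns.getName());   // prints default, hbase, demo_ns, ...
    }
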
2023-07-24 23:10:24,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.956sec 2023-07-24 23:10:24,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 23:10:24,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 23:10:24,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 23:10:24,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42959,1690240218606-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 23:10:24,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42959,1690240218606-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 23:10:24,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 23:10:24,500 DEBUG [Listener at localhost/39785] zookeeper.ReadOnlyZKClient(139): Connect 0x52c3922c to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:24,506 DEBUG [Listener at localhost/39785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48b9c2bd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:24,524 DEBUG [hconnection-0x45c34053-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:24,537 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48084, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:24,548 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:24,549 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:24,558 DEBUG [Listener at localhost/39785] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 23:10:24,562 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34864, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 23:10:24,577 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 23:10:24,577 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:24,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 23:10:24,584 DEBUG [Listener at localhost/39785] 
zookeeper.ReadOnlyZKClient(139): Connect 0x493f5df2 to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:24,589 DEBUG [Listener at localhost/39785] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67598398, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:24,589 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:24,594 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:24,602 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019999755d000a connected 2023-07-24 23:10:24,637 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=420, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=413, ProcessCount=177, AvailableMemoryMB=6659 2023-07-24 23:10:24,640 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-24 23:10:24,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:24,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:24,718 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 23:10:24,735 INFO [Listener at localhost/39785] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:24,736 INFO [Listener at localhost/39785] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:24,741 INFO [Listener at localhost/39785] ipc.NettyRpcServer(120): Bind to 
/172.31.14.131:46215 2023-07-24 23:10:24,742 INFO [Listener at localhost/39785] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:24,743 DEBUG [Listener at localhost/39785] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:24,745 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:24,748 INFO [Listener at localhost/39785] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:24,751 INFO [Listener at localhost/39785] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46215 connecting to ZooKeeper ensemble=127.0.0.1:59310 2023-07-24 23:10:24,760 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:462150x0, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:24,761 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46215-0x1019999755d000b connected 2023-07-24 23:10:24,762 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:24,763 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 23:10:24,764 DEBUG [Listener at localhost/39785] zookeeper.ZKUtil(164): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:24,766 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46215 2023-07-24 23:10:24,770 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46215 2023-07-24 23:10:24,771 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46215 2023-07-24 23:10:24,771 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46215 2023-07-24 23:10:24,773 DEBUG [Listener at localhost/39785] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46215 2023-07-24 23:10:24,775 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:24,776 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:24,776 INFO [Listener at localhost/39785] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:24,776 INFO [Listener at localhost/39785] http.HttpServer(879): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:24,777 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:24,777 INFO [Listener at localhost/39785] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:24,777 INFO [Listener at localhost/39785] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:24,777 INFO [Listener at localhost/39785] http.HttpServer(1146): Jetty bound to port 33289 2023-07-24 23:10:24,777 INFO [Listener at localhost/39785] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:24,781 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:24,781 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b476413{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:24,781 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:24,782 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ebea3f1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:24,928 INFO [Listener at localhost/39785] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:24,930 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:24,930 INFO [Listener at localhost/39785] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:24,930 INFO [Listener at localhost/39785] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:24,932 INFO [Listener at localhost/39785] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:24,933 INFO [Listener at localhost/39785] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4783a073{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/java.io.tmpdir/jetty-0_0_0_0-33289-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8586384398608756829/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:24,935 INFO [Listener at localhost/39785] server.AbstractConnector(333): Started ServerConnector@143940fe{HTTP/1.1, (http/1.1)}{0.0.0.0:33289} 2023-07-24 23:10:24,935 INFO [Listener at localhost/39785] server.Server(415): Started @12239ms 2023-07-24 23:10:24,942 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(951): ClusterId : 
84747357-cecf-4454-93dc-a1cdf648adda 2023-07-24 23:10:24,942 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:24,946 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:24,946 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:24,949 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:24,950 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ReadOnlyZKClient(139): Connect 0x1c443cd9 to 127.0.0.1:59310 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:24,984 DEBUG [RS:3;jenkins-hbase4:46215] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1283c121, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:24,984 DEBUG [RS:3;jenkins-hbase4:46215] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16bc57a9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:25,001 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:46215 2023-07-24 23:10:25,001 INFO [RS:3;jenkins-hbase4:46215] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:25,001 INFO [RS:3;jenkins-hbase4:46215] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:25,001 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:25,003 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42959,1690240218606 with isa=jenkins-hbase4.apache.org/172.31.14.131:46215, startcode=1690240224735 2023-07-24 23:10:25,003 DEBUG [RS:3;jenkins-hbase4:46215] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:25,013 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50131, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:25,013 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42959] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,013 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
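
A few entries back the test disabled the balancer (balanceSwitch=false) and issued ListRSGroupInfos through the RSGroupAdminEndpoint; here the extra region server registers and the default group is updated. From client code the same calls look roughly like the sketch below; RSGroupAdminClient is the client class in this hbase-rsgroup module, but treat the exact constructor and method signatures as assumptions.

    // Assumes an open Connection `conn`; RSGroupAdminClient comes from hbase-rsgroup.
    Admin admin = conn.getAdmin();
    admin.balancerSwitch(false, true);          // what the balanceSwitch=false entry above reflects
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      System.out.println(group.getName() + " servers=" + group.getServers());
    }
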
2023-07-24 23:10:25,014 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c 2023-07-24 23:10:25,014 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38733 2023-07-24 23:10:25,014 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46533 2023-07-24 23:10:25,020 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:25,020 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:25,021 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:25,021 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:25,021 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:25,021 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46215,1690240224735] 2023-07-24 23:10:25,022 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:25,023 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,023 WARN [RS:3;jenkins-hbase4:46215] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
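
The RS:3 thread above is the extra region server the test brings up ("Restoring servers: 1" earlier). In a minicluster that is typically a one-liner against MiniHBaseCluster; a sketch, again assuming a started HBaseTestingUtility named TEST_UTIL.

    // Starts one more region server thread in the minicluster and waits for it to
    // come online; this is what produces the RS:3;jenkins-hbase4:46215 entries above.
    JVMClusterUtil.RegionServerThread rst =
        TEST_UTIL.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline();
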
2023-07-24 23:10:25,023 INFO [RS:3;jenkins-hbase4:46215] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:25,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:25,023 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,030 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:25,030 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:25,030 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42959,1690240218606] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 23:10:25,030 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:25,031 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:25,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:25,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:25,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:25,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:25,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,038 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,040 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:25,041 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:25,041 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:25,042 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ZKUtil(162): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,044 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:25,044 INFO [RS:3;jenkins-hbase4:46215] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:25,047 INFO [RS:3;jenkins-hbase4:46215] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:25,048 INFO [RS:3;jenkins-hbase4:46215] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:25,048 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:25,048 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:25,052 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:25,052 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,052 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,052 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,052 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,052 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,053 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:25,053 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,053 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,053 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,053 DEBUG [RS:3;jenkins-hbase4:46215] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:25,057 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:25,057 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:25,057 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:25,080 INFO [RS:3;jenkins-hbase4:46215] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:25,080 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46215,1690240224735-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:25,097 INFO [RS:3;jenkins-hbase4:46215] regionserver.Replication(203): jenkins-hbase4.apache.org,46215,1690240224735 started 2023-07-24 23:10:25,097 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46215,1690240224735, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46215, sessionid=0x1019999755d000b 2023-07-24 23:10:25,097 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:25,097 DEBUG [RS:3;jenkins-hbase4:46215] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,097 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46215,1690240224735' 2023-07-24 23:10:25,097 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:25,098 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:25,099 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:25,099 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:25,099 DEBUG [RS:3;jenkins-hbase4:46215] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:25,099 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46215,1690240224735' 2023-07-24 23:10:25,099 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:25,101 DEBUG [RS:3;jenkins-hbase4:46215] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:25,102 DEBUG [RS:3;jenkins-hbase4:46215] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:25,102 INFO [RS:3;jenkins-hbase4:46215] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:25,102 INFO [RS:3;jenkins-hbase4:46215] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 23:10:25,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:25,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:25,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:25,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:25,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:25,118 DEBUG [hconnection-0x63581179-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:25,121 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48086, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:25,126 DEBUG [hconnection-0x63581179-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:25,129 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54536, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:25,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:25,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:25,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:25,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-24 23:10:25,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34864 deadline: 1690241425141, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:25,144 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-24 23:10:25,146 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:25,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:25,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:25,149 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:25,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:25,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:25,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:25,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:25,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:25,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:25,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:25,165 DEBUG
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:25,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:25,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:25,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:25,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:25,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:25,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:25,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:25,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:25,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:25,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:25,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 23:10:25,198 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 23:10:25,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 23:10:25,199 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33649,1690240221185, state=CLOSING 2023-07-24 23:10:25,201 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:25,201 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:25,201 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:25,209 INFO [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46215%2C1690240224735, suffix=, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:25,233 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:25,234 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:25,237 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:25,243 INFO [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735/jenkins-hbase4.apache.org%2C46215%2C1690240224735.1690240225210 2023-07-24 23:10:25,246 DEBUG [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK]] 2023-07-24 23:10:25,367 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-24 23:10:25,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:25,368 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:25,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:25,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:25,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:25,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.49 KB heapSize=5 KB 2023-07-24 23:10:25,473 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.31 KB at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/info/e0673bd20526432f9dee0d9515c03e04 2023-07-24 23:10:25,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/table/dc6f6af16d614925aacad24c0417c936 2023-07-24 23:10:25,612 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/info/e0673bd20526432f9dee0d9515c03e04 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info/e0673bd20526432f9dee0d9515c03e04 2023-07-24 23:10:25,624 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info/e0673bd20526432f9dee0d9515c03e04, entries=20, sequenceid=14, filesize=7.0 K 2023-07-24 23:10:25,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/table/dc6f6af16d614925aacad24c0417c936 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table/dc6f6af16d614925aacad24c0417c936 2023-07-24 23:10:25,666 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table/dc6f6af16d614925aacad24c0417c936, entries=4, sequenceid=14, filesize=4.8 K 2023-07-24 23:10:25,670 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.49 KB/2550, heapSize ~4.72 KB/4832, currentSize=0 B/0 for 1588230740 in 301ms, sequenceid=14, compaction requested=false 2023-07-24 23:10:25,671 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 23:10:25,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-24 23:10:25,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:25,711 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:25,711 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:25,711 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46215,1690240224735 record at close sequenceid=14 2023-07-24 23:10:25,714 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-24 23:10:25,715 WARN [PEWorker-1] 
zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-24 23:10:25,719 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 23:10:25,719 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33649,1690240221185 in 514 msec 2023-07-24 23:10:25,722 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:25,873 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 23:10:25,873 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46215,1690240224735, state=OPENING 2023-07-24 23:10:25,876 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:25,876 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:25,876 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:26,030 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:26,031 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:26,034 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39968, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:26,039 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 23:10:26,039 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:26,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46215%2C1690240224735.meta, suffix=.meta, logDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735, archiveDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs, maxLogs=32 2023-07-24 23:10:26,059 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK] 2023-07-24 23:10:26,062 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK] 2023-07-24 23:10:26,063 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK] 2023-07-24 23:10:26,069 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735/jenkins-hbase4.apache.org%2C46215%2C1690240224735.meta.1690240226042.meta 2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK]] 2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 23:10:26,070 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 23:10:26,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:26,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 23:10:26,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 23:10:26,073 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:26,074 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info 2023-07-24 23:10:26,074 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info 2023-07-24 23:10:26,075 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:26,095 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info/e0673bd20526432f9dee0d9515c03e04 2023-07-24 23:10:26,096 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:26,096 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:26,097 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:26,098 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 
23:10:26,098 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:26,099 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:26,099 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:26,100 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table 2023-07-24 23:10:26,100 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table 2023-07-24 23:10:26,100 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:26,112 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table/dc6f6af16d614925aacad24c0417c936 2023-07-24 23:10:26,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:26,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:26,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740 2023-07-24 23:10:26,120 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 23:10:26,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:26,124 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10995166400, jitterRate=0.02400466799736023}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:26,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:26,125 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=14, masterSystemTime=1690240226030 2023-07-24 23:10:26,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 23:10:26,130 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 23:10:26,131 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46215,1690240224735, state=OPEN 2023-07-24 23:10:26,132 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:26,132 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:26,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-24 23:10:26,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46215,1690240224735 in 256 msec 2023-07-24 23:10:26,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 944 msec 2023-07-24 23:10:26,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-24 23:10:26,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to default 2023-07-24 23:10:26,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:26,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:26,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:26,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:26,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:26,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:26,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:26,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:26,226 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:26,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-24 23:10:26,234 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:26,238 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:26,238 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:26,239 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:26,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 23:10:26,249 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:26,251 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33649] ipc.CallRunner(144): callId: 39 service: ClientService methodName: Get size: 151 connection: 172.31.14.131:48058 deadline: 1690240286251, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=14. 
2023-07-24 23:10:26,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 23:10:26,353 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:26,354 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39974, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:26,370 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:26,371 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:26,374 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 empty. 2023-07-24 23:10:26,375 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:26,378 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 empty. 2023-07-24 23:10:26,379 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:26,379 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:26,379 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d empty. 2023-07-24 23:10:26,379 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 empty. 
2023-07-24 23:10:26,380 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:26,380 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:26,380 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:26,381 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 empty. 2023-07-24 23:10:26,381 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:26,381 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:26,381 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 23:10:26,435 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:26,439 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => a5773ebe0b9e5a7db3b6a0ead423c767, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:26,452 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5169c491999325b863c963221245bb28, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:26,455 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 030d5a884d950c2557a29b8f5d09b67d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:26,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 23:10:26,555 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:26,557 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing a5773ebe0b9e5a7db3b6a0ead423c767, disabling compactions & flushes 2023-07-24 23:10:26,557 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:26,557 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:26,557 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. after waiting 0 ms 2023-07-24 23:10:26,557 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:26,557 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 
2023-07-24 23:10:26,557 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for a5773ebe0b9e5a7db3b6a0ead423c767: 2023-07-24 23:10:26,559 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1d45a257eaaa0c8fececfe733cfc6944, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 5169c491999325b863c963221245bb28, disabling compactions & flushes 2023-07-24 23:10:26,559 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. after waiting 0 ms 2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:26,559 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 
2023-07-24 23:10:26,559 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 5169c491999325b863c963221245bb28: 2023-07-24 23:10:26,560 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1342cdaaaaef321fc1c8c4dca7995875, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:26,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:26,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 030d5a884d950c2557a29b8f5d09b67d, disabling compactions & flushes 2023-07-24 23:10:26,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:26,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:26,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. after waiting 0 ms 2023-07-24 23:10:26,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:26,590 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 
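The five "creating {ENCODED => ...}" entries above print the table descriptor (a single family 'f' with VERSIONS=1 and no bloom filter) and the region boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', ''. A hedged reconstruction of the client call that requests such a layout; the attributes and split keys are copied from the log, while the connection setup and class names are assumptions:

```java
// Hypothetical reconstruction of the client call behind pid=15 (CreateTableProcedure).
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Single family 'f' with VERSIONS=1, as printed in the region descriptors above.
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .build())
          .build();
      // Four split keys yield the five regions seen in the log:
      // ['', 'aaaaa'), ['aaaaa', 'i\xBF\x14i\xBE'), ..., ['zzzzz', '').
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splits);  // blocks until the CreateTableProcedure finishes
    }
  }
}
```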
2023-07-24 23:10:26,590 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 030d5a884d950c2557a29b8f5d09b67d: 2023-07-24 23:10:26,628 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:26,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 1d45a257eaaa0c8fececfe733cfc6944, disabling compactions & flushes 2023-07-24 23:10:26,629 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:26,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:26,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. after waiting 0 ms 2023-07-24 23:10:26,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:26,629 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:26,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 1d45a257eaaa0c8fececfe733cfc6944: 2023-07-24 23:10:26,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1342cdaaaaef321fc1c8c4dca7995875, disabling compactions & flushes 2023-07-24 23:10:27,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 
after waiting 0 ms 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1342cdaaaaef321fc1c8c4dca7995875: 2023-07-24 23:10:27,036 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:27,037 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240227037"}]},"ts":"1690240227037"} 2023-07-24 23:10:27,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240227037"}]},"ts":"1690240227037"} 2023-07-24 23:10:27,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240227037"}]},"ts":"1690240227037"} 2023-07-24 23:10:27,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240227037"}]},"ts":"1690240227037"} 2023-07-24 23:10:27,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240227037"}]},"ts":"1690240227037"} 2023-07-24 23:10:27,095 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
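Once "Added 5 regions to meta" is logged, the regioninfo and state rows written above are visible to clients. A short sketch of how the resulting layout could be confirmed from the client side; the method and variable names are assumptions, not the test's code:

```java
// Sketch: list the regions just registered in hbase:meta for the new table.
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ListRegionsSketch {
  static void printRegions(Admin admin) throws IOException {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    List<RegionInfo> regions = admin.getRegions(name);  // backed by hbase:meta
    // Expect 5 entries with boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
    for (RegionInfo ri : regions) {
      System.out.println(ri.getEncodedName() + " ["
          + Bytes.toStringBinary(ri.getStartKey()) + ", "
          + Bytes.toStringBinary(ri.getEndKey()) + ")");
    }
  }
}
```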
2023-07-24 23:10:27,097 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:27,097 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240227097"}]},"ts":"1690240227097"} 2023-07-24 23:10:27,099 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 23:10:27,109 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:27,109 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:27,109 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:27,109 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:27,110 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, ASSIGN}] 2023-07-24 23:10:27,116 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, ASSIGN 2023-07-24 23:10:27,116 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, ASSIGN 2023-07-24 23:10:27,117 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, ASSIGN 2023-07-24 23:10:27,117 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, ASSIGN 2023-07-24 23:10:27,119 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:27,119 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:27,119 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, ASSIGN 2023-07-24 23:10:27,119 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:27,119 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:27,121 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:27,270 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
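The balancer entry above ("Reassigned 5 regions. 5 retained the pre-restart assignment.") and the TransitRegionStateProcedures decide which region server opens each of the five regions. After the OpenRegionProcedures that follow have finished, the placement can be observed from a client through the region locator. A minimal sketch, with connection setup assumed:

```java
// Sketch: inspect where each region of the table ended up after assignment.
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementSketch {
  static void printPlacement(Connection conn) throws IOException {
    TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(name)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // e.g. jenkins-hbase4.apache.org,42429,1690240220974 hosts a5773ebe0b9e5a7db3b6a0ead423c767
        System.out.println(loc.getServerName() + " hosts "
            + loc.getRegion().getEncodedName());
      }
    }
  }
}
```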
2023-07-24 23:10:27,273 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:27,274 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,274 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240227273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240227273"}]},"ts":"1690240227273"} 2023-07-24 23:10:27,274 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:27,273 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,275 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240227273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240227273"}]},"ts":"1690240227273"} 2023-07-24 23:10:27,275 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240227273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240227273"}]},"ts":"1690240227273"} 2023-07-24 23:10:27,274 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240227273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240227273"}]},"ts":"1690240227273"} 2023-07-24 23:10:27,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=19, state=RUNNABLE; OpenRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:27,279 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=20, state=RUNNABLE; OpenRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:27,282 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=17, state=RUNNABLE; OpenRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:27,283 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:27,274 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,285 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227274"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240227274"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240227274"}]},"ts":"1690240227274"} 2023-07-24 23:10:27,297 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=16, state=RUNNABLE; OpenRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:27,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 23:10:27,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:27,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:27,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1d45a257eaaa0c8fececfe733cfc6944, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 23:10:27,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 030d5a884d950c2557a29b8f5d09b67d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 23:10:27,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,454 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,455 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,456 DEBUG [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/f 2023-07-24 23:10:27,456 DEBUG [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/f 2023-07-24 23:10:27,457 DEBUG [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/f 2023-07-24 23:10:27,457 DEBUG [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/f 2023-07-24 23:10:27,457 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 030d5a884d950c2557a29b8f5d09b67d columnFamilyName f 2023-07-24 23:10:27,458 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1d45a257eaaa0c8fececfe733cfc6944 columnFamilyName f 2023-07-24 23:10:27,458 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] regionserver.HStore(310): Store=030d5a884d950c2557a29b8f5d09b67d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:27,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,461 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] regionserver.HStore(310): Store=1d45a257eaaa0c8fececfe733cfc6944/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:27,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:27,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:27,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:27,479 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 030d5a884d950c2557a29b8f5d09b67d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10849726720, jitterRate=0.010459542274475098}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:27,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
030d5a884d950c2557a29b8f5d09b67d: 2023-07-24 23:10:27,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d., pid=24, masterSystemTime=1690240227434 2023-07-24 23:10:27,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:27,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:27,485 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,485 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227485"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240227485"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240227485"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240227485"}]},"ts":"1690240227485"} 2023-07-24 23:10:27,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:27,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a5773ebe0b9e5a7db3b6a0ead423c767, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 23:10:27,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:27,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1d45a257eaaa0c8fececfe733cfc6944; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9690155360, jitterRate=-0.09753395617008209}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:27,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1d45a257eaaa0c8fececfe733cfc6944: 2023-07-24 23:10:27,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944., pid=21, masterSystemTime=1690240227433 2023-07-24 23:10:27,495 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:27,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:27,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:27,499 DEBUG [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/f 2023-07-24 23:10:27,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5169c491999325b863c963221245bb28, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 23:10:27,499 DEBUG [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/f 2023-07-24 23:10:27,499 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-24 23:10:27,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5169c491999325b863c963221245bb28 2023-07-24 23:10:27,499 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,42429,1690240220974 in 205 msec 2023-07-24 23:10:27,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,499 INFO [PEWorker-1] assignment.RegionStateStore(219): 
pid=19 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:27,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5169c491999325b863c963221245bb28 2023-07-24 23:10:27,500 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a5773ebe0b9e5a7db3b6a0ead423c767 columnFamilyName f 2023-07-24 23:10:27,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5169c491999325b863c963221245bb28 2023-07-24 23:10:27,500 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227499"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240227499"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240227499"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240227499"}]},"ts":"1690240227499"} 2023-07-24 23:10:27,504 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5169c491999325b863c963221245bb28 2023-07-24 23:10:27,504 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] regionserver.HStore(310): Store=a5773ebe0b9e5a7db3b6a0ead423c767/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:27,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,508 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, ASSIGN in 390 msec 2023-07-24 23:10:27,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,509 DEBUG [StoreOpener-5169c491999325b863c963221245bb28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/f 2023-07-24 23:10:27,510 DEBUG [StoreOpener-5169c491999325b863c963221245bb28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/f 2023-07-24 23:10:27,511 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=19 2023-07-24 23:10:27,511 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=19, state=SUCCESS; OpenRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,46215,1690240224735 in 229 msec 2023-07-24 23:10:27,511 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5169c491999325b863c963221245bb28 columnFamilyName f 2023-07-24 23:10:27,512 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] regionserver.HStore(310): Store=5169c491999325b863c963221245bb28/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:27,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:27,516 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, ASSIGN in 401 msec 2023-07-24 23:10:27,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:27,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:27,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:27,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5169c491999325b863c963221245bb28 2023-07-24 23:10:27,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
a5773ebe0b9e5a7db3b6a0ead423c767; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11814261600, jitterRate=0.10028885304927826}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:27,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a5773ebe0b9e5a7db3b6a0ead423c767: 2023-07-24 23:10:27,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767., pid=25, masterSystemTime=1690240227434 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:27,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:27,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1342cdaaaaef321fc1c8c4dca7995875, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,531 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,532 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227531"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240227531"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240227531"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240227531"}]},"ts":"1690240227531"} 2023-07-24 23:10:27,532 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,536 DEBUG [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/f 2023-07-24 23:10:27,536 DEBUG [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/f 2023-07-24 23:10:27,537 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1342cdaaaaef321fc1c8c4dca7995875 columnFamilyName f 2023-07-24 23:10:27,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:27,543 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] regionserver.HStore(310): Store=1342cdaaaaef321fc1c8c4dca7995875/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:27,545 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5169c491999325b863c963221245bb28; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11977345120, jitterRate=0.11547718942165375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:27,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5169c491999325b863c963221245bb28: 2023-07-24 23:10:27,545 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=16 2023-07-24 23:10:27,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,545 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=16, state=SUCCESS; OpenRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,42429,1690240220974 in 237 msec 2023-07-24 23:10:27,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28., pid=23, masterSystemTime=1690240227433 2023-07-24 23:10:27,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, ASSIGN in 435 msec 2023-07-24 23:10:27,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:27,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:27,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:27,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:27,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1342cdaaaaef321fc1c8c4dca7995875; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10281549760, jitterRate=-0.04245606064796448}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:27,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1342cdaaaaef321fc1c8c4dca7995875: 2023-07-24 23:10:27,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875., pid=22, masterSystemTime=1690240227434 2023-07-24 23:10:27,557 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:27,558 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240227557"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240227557"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240227557"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240227557"}]},"ts":"1690240227557"} 2023-07-24 23:10:27,565 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:27,565 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished 
subprocedure pid=23, resume processing ppid=17 2023-07-24 23:10:27,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=17, state=SUCCESS; OpenRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,46215,1690240224735 in 278 msec 2023-07-24 23:10:27,567 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240227565"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240227565"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240227565"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240227565"}]},"ts":"1690240227565"} 2023-07-24 23:10:27,569 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, ASSIGN in 457 msec 2023-07-24 23:10:27,571 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,572 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:27,573 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=20 2023-07-24 23:10:27,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=20, state=SUCCESS; OpenRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,42429,1690240220974 in 290 msec 2023-07-24 23:10:27,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-24 23:10:27,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, ASSIGN in 464 msec 2023-07-24 23:10:27,581 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:27,581 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240227581"}]},"ts":"1690240227581"} 2023-07-24 23:10:27,584 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 23:10:27,588 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:27,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.3670 sec 2023-07-24 23:10:28,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 
23:10:28,361 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-24 23:10:28,361 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-24 23:10:28,362 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:28,363 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33649] ipc.CallRunner(144): callId: 51 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:48084 deadline: 1690240288363, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=14. 2023-07-24 23:10:28,472 DEBUG [hconnection-0x45c34053-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:28,478 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39986, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:28,520 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-24 23:10:28,521 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:28,521 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-24 23:10:28,522 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:28,528 DEBUG [Listener at localhost/39785] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:28,547 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:28,549 DEBUG [Listener at localhost/39785] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:28,556 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:28,557 DEBUG [Listener at localhost/39785] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:28,559 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54544, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:28,560 DEBUG [Listener at localhost/39785] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:28,562 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39996, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:28,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, 
table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:28,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:28,575 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:28,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:28,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:28,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region a5773ebe0b9e5a7db3b6a0ead423c767 to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:28,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:28,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:28,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:28,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:28,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, REOPEN/MOVE 2023-07-24 23:10:28,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 5169c491999325b863c963221245bb28 to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,597 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, 
REOPEN/MOVE 2023-07-24 23:10:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:28,598 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:28,598 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228598"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228598"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228598"}]},"ts":"1690240228598"} 2023-07-24 23:10:28,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, REOPEN/MOVE 2023-07-24 23:10:28,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 030d5a884d950c2557a29b8f5d09b67d to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,600 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, REOPEN/MOVE 2023-07-24 23:10:28,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:28,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:28,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:28,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:28,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:28,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, REOPEN/MOVE 2023-07-24 23:10:28,601 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 
updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:28,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 1d45a257eaaa0c8fececfe733cfc6944 to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,602 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, REOPEN/MOVE 2023-07-24 23:10:28,602 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228601"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228601"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228601"}]},"ts":"1690240228601"} 2023-07-24 23:10:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:28,602 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:28,607 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:28,607 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228607"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228607"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228607"}]},"ts":"1690240228607"} 2023-07-24 23:10:28,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, REOPEN/MOVE 2023-07-24 23:10:28,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 1342cdaaaaef321fc1c8c4dca7995875 to RSGroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:28,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, REOPEN/MOVE 2023-07-24 23:10:28,610 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:28,610 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228610"}]},"ts":"1690240228610"} 2023-07-24 23:10:28,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:28,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:28,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:28,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:28,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:28,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=27, state=RUNNABLE; CloseRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:28,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, REOPEN/MOVE 2023-07-24 23:10:28,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1730139615, current retry=0 2023-07-24 23:10:28,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:28,615 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, REOPEN/MOVE 2023-07-24 23:10:28,618 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:28,622 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:28,623 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228622"}]},"ts":"1690240228622"} 2023-07-24 23:10:28,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=31, state=RUNNABLE; CloseRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:28,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:28,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a5773ebe0b9e5a7db3b6a0ead423c767, disabling compactions & flushes 2023-07-24 23:10:28,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:28,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:28,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. after waiting 0 ms 2023-07-24 23:10:28,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:28,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:28,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:28,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a5773ebe0b9e5a7db3b6a0ead423c767: 2023-07-24 23:10:28,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a5773ebe0b9e5a7db3b6a0ead423c767 move to jenkins-hbase4.apache.org,33649,1690240221185 record at close sequenceid=2 2023-07-24 23:10:28,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5169c491999325b863c963221245bb28 2023-07-24 23:10:28,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5169c491999325b863c963221245bb28, disabling compactions & flushes 2023-07-24 23:10:28,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 
2023-07-24 23:10:28,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:28,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. after waiting 0 ms 2023-07-24 23:10:28,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:28,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:28,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:28,781 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=CLOSED 2023-07-24 23:10:28,781 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228781"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240228781"}]},"ts":"1690240228781"} 2023-07-24 23:10:28,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 030d5a884d950c2557a29b8f5d09b67d, disabling compactions & flushes 2023-07-24 23:10:28,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:28,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:28,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. after waiting 0 ms 2023-07-24 23:10:28,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 
2023-07-24 23:10:28,787 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-24 23:10:28,787 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,42429,1690240220974 in 181 msec 2023-07-24 23:10:28,788 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:28,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:28,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:28,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5169c491999325b863c963221245bb28: 2023-07-24 23:10:28,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5169c491999325b863c963221245bb28 move to jenkins-hbase4.apache.org,33649,1690240221185 record at close sequenceid=2 2023-07-24 23:10:28,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5169c491999325b863c963221245bb28 2023-07-24 23:10:28,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1d45a257eaaa0c8fececfe733cfc6944, disabling compactions & flushes 2023-07-24 23:10:28,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:28,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. after waiting 0 ms 2023-07-24 23:10:28,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:28,800 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=CLOSED 2023-07-24 23:10:28,800 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228800"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240228800"}]},"ts":"1690240228800"} 2023-07-24 23:10:28,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:28,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 030d5a884d950c2557a29b8f5d09b67d: 2023-07-24 23:10:28,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 030d5a884d950c2557a29b8f5d09b67d move to jenkins-hbase4.apache.org,36981,1690240220580 record at close sequenceid=2 2023-07-24 23:10:28,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:28,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:28,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1342cdaaaaef321fc1c8c4dca7995875, disabling compactions & flushes 2023-07-24 23:10:28,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:28,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:28,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. after waiting 0 ms 2023-07-24 23:10:28,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 
2023-07-24 23:10:28,806 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=CLOSED 2023-07-24 23:10:28,806 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240228806"}]},"ts":"1690240228806"} 2023-07-24 23:10:28,817 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=27 2023-07-24 23:10:28,818 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=27, state=SUCCESS; CloseRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,46215,1690240224735 in 194 msec 2023-07-24 23:10:28,819 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:28,820 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-24 23:10:28,820 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,42429,1690240220974 in 202 msec 2023-07-24 23:10:28,822 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:28,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:28,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:28,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1d45a257eaaa0c8fececfe733cfc6944: 2023-07-24 23:10:28,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1d45a257eaaa0c8fececfe733cfc6944 move to jenkins-hbase4.apache.org,36981,1690240220580 record at close sequenceid=2 2023-07-24 23:10:28,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:28,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:28,837 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=CLOSED 2023-07-24 23:10:28,837 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228837"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240228837"}]},"ts":"1690240228837"} 2023-07-24 23:10:28,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:28,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1342cdaaaaef321fc1c8c4dca7995875: 2023-07-24 23:10:28,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1342cdaaaaef321fc1c8c4dca7995875 move to jenkins-hbase4.apache.org,33649,1690240221185 record at close sequenceid=2 2023-07-24 23:10:28,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:28,841 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=CLOSED 2023-07-24 23:10:28,841 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228841"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240228841"}]},"ts":"1690240228841"} 2023-07-24 23:10:28,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-24 23:10:28,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,46215,1690240224735 in 222 msec 2023-07-24 23:10:28,844 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:28,845 INFO 
[PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=31 2023-07-24 23:10:28,845 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=31, state=SUCCESS; CloseRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,42429,1690240220974 in 212 msec 2023-07-24 23:10:28,846 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:28,939 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-24 23:10:28,939 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:28,940 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:28,940 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:28,940 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:28,940 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228939"}]},"ts":"1690240228939"} 2023-07-24 23:10:28,940 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228939"}]},"ts":"1690240228939"} 2023-07-24 23:10:28,940 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228939"}]},"ts":"1690240228939"} 2023-07-24 23:10:28,940 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240228939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228939"}]},"ts":"1690240228939"} 2023-07-24 23:10:28,940 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:28,941 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240228939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240228939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240228939"}]},"ts":"1690240228939"} 2023-07-24 23:10:28,942 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=26, state=RUNNABLE; OpenRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:28,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:28,952 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=28, state=RUNNABLE; OpenRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:28,952 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=27, state=RUNNABLE; OpenRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:28,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=31, state=RUNNABLE; OpenRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:29,018 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 23:10:29,093 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 23:10:29,093 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 23:10:29,094 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:29,094 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 23:10:29,094 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 23:10:29,094 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 23:10:29,095 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 23:10:29,096 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 23:10:29,114 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:29,114 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:29,116 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49466, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:29,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1342cdaaaaef321fc1c8c4dca7995875, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 23:10:29,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1d45a257eaaa0c8fececfe733cfc6944, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,127 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,127 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,128 DEBUG [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/f 2023-07-24 23:10:29,128 DEBUG [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/f 2023-07-24 23:10:29,129 DEBUG [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/f 2023-07-24 23:10:29,129 DEBUG [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/f 2023-07-24 23:10:29,130 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1d45a257eaaa0c8fececfe733cfc6944 columnFamilyName f 2023-07-24 23:10:29,130 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1342cdaaaaef321fc1c8c4dca7995875 columnFamilyName f 2023-07-24 23:10:29,130 INFO [StoreOpener-1d45a257eaaa0c8fececfe733cfc6944-1] regionserver.HStore(310): Store=1d45a257eaaa0c8fececfe733cfc6944/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:29,130 INFO [StoreOpener-1342cdaaaaef321fc1c8c4dca7995875-1] regionserver.HStore(310): Store=1342cdaaaaef321fc1c8c4dca7995875/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:29,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,141 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1d45a257eaaa0c8fececfe733cfc6944; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11366341440, jitterRate=0.05857303738594055}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:29,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1d45a257eaaa0c8fececfe733cfc6944: 2023-07-24 23:10:29,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944., pid=37, masterSystemTime=1690240229113 2023-07-24 23:10:29,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1342cdaaaaef321fc1c8c4dca7995875; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10070179040, jitterRate=-0.06214149296283722}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:29,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1342cdaaaaef321fc1c8c4dca7995875: 2023-07-24 23:10:29,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:29,149 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:29,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:29,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229149"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240229149"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240229149"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240229149"}]},"ts":"1690240229149"} 2023-07-24 23:10:29,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:29,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 030d5a884d950c2557a29b8f5d09b67d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 23:10:29,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:29,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875., pid=40, masterSystemTime=1690240229113 2023-07-24 23:10:29,156 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 
2023-07-24 23:10:29,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a5773ebe0b9e5a7db3b6a0ead423c767, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 23:10:29,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:29,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,158 DEBUG [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/f 2023-07-24 23:10:29,158 DEBUG [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/f 2023-07-24 23:10:29,158 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,159 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229157"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240229157"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240229157"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240229157"}]},"ts":"1690240229157"} 2023-07-24 23:10:29,159 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 030d5a884d950c2557a29b8f5d09b67d columnFamilyName f 2023-07-24 23:10:29,160 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-24 23:10:29,167 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, REOPEN/MOVE in 553 msec 2023-07-24 23:10:29,171 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,165 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=31 2023-07-24 23:10:29,171 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=31, state=SUCCESS; OpenRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,33649,1690240221185 in 208 msec 2023-07-24 23:10:29,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, REOPEN/MOVE in 557 msec 2023-07-24 23:10:29,161 INFO [StoreOpener-030d5a884d950c2557a29b8f5d09b67d-1] regionserver.HStore(310): Store=030d5a884d950c2557a29b8f5d09b67d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:29,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,36981,1690240220580 in 203 msec 2023-07-24 23:10:29,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,174 DEBUG [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/f 2023-07-24 23:10:29,174 DEBUG [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/f 2023-07-24 23:10:29,175 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a5773ebe0b9e5a7db3b6a0ead423c767 columnFamilyName f 2023-07-24 23:10:29,176 INFO [StoreOpener-a5773ebe0b9e5a7db3b6a0ead423c767-1] regionserver.HStore(310): Store=a5773ebe0b9e5a7db3b6a0ead423c767/f, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:29,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 030d5a884d950c2557a29b8f5d09b67d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10922903520, jitterRate=0.017274662852287292}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:29,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 030d5a884d950c2557a29b8f5d09b67d: 2023-07-24 23:10:29,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d., pid=39, masterSystemTime=1690240229113 2023-07-24 23:10:29,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a5773ebe0b9e5a7db3b6a0ead423c767; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11786850880, jitterRate=0.09773603081703186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:29,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a5773ebe0b9e5a7db3b6a0ead423c767: 2023-07-24 23:10:29,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:29,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 
2023-07-24 23:10:29,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767., pid=36, masterSystemTime=1690240229113 2023-07-24 23:10:29,194 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:29,195 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229194"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240229194"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240229194"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240229194"}]},"ts":"1690240229194"} 2023-07-24 23:10:29,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:29,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5169c491999325b863c963221245bb28, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 23:10:29,197 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,197 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240229196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240229196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240229196"}]},"ts":"1690240229196"} 2023-07-24 23:10:29,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:29,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,197 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,200 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=28 2023-07-24 23:10:29,200 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=28, state=SUCCESS; OpenRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,36981,1690240220580 in 245 msec 2023-07-24 23:10:29,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=26 2023-07-24 23:10:29,202 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, REOPEN/MOVE in 600 msec 2023-07-24 23:10:29,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=26, state=SUCCESS; OpenRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,33649,1690240221185 in 257 msec 2023-07-24 23:10:29,203 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,204 DEBUG [StoreOpener-5169c491999325b863c963221245bb28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/f 2023-07-24 23:10:29,204 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, REOPEN/MOVE in 607 msec 2023-07-24 23:10:29,204 DEBUG [StoreOpener-5169c491999325b863c963221245bb28-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/f 2023-07-24 23:10:29,205 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5169c491999325b863c963221245bb28 columnFamilyName f 2023-07-24 23:10:29,205 INFO [StoreOpener-5169c491999325b863c963221245bb28-1] regionserver.HStore(310): Store=5169c491999325b863c963221245bb28/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:29,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:29,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:29,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5169c491999325b863c963221245bb28; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9879206720, jitterRate=-0.07992717623710632}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:29,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5169c491999325b863c963221245bb28: 2023-07-24 23:10:29,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28., pid=38, masterSystemTime=1690240229113 2023-07-24 23:10:29,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:29,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 
2023-07-24 23:10:29,217 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,217 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229217"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240229217"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240229217"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240229217"}]},"ts":"1690240229217"} 2023-07-24 23:10:29,225 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=27 2023-07-24 23:10:29,225 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=27, state=SUCCESS; OpenRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,33649,1690240221185 in 269 msec 2023-07-24 23:10:29,230 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, REOPEN/MOVE in 628 msec 2023-07-24 23:10:29,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-24 23:10:29,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1730139615. 
2023-07-24 23:10:29,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:29,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:29,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:29,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:29,624 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:29,630 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,646 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240229646"}]},"ts":"1690240229646"} 2023-07-24 23:10:29,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-24 23:10:29,648 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 23:10:29,650 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 23:10:29,652 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, UNASSIGN}] 2023-07-24 23:10:29,655 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, UNASSIGN 2023-07-24 23:10:29,655 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, UNASSIGN 2023-07-24 23:10:29,655 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, UNASSIGN 2023-07-24 23:10:29,656 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, UNASSIGN 2023-07-24 23:10:29,657 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, UNASSIGN 2023-07-24 23:10:29,657 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,657 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,657 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:29,657 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229657"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240229657"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240229657"}]},"ts":"1690240229657"} 2023-07-24 23:10:29,657 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229657"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240229657"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240229657"}]},"ts":"1690240229657"} 2023-07-24 23:10:29,657 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229657"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240229657"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240229657"}]},"ts":"1690240229657"} 2023-07-24 23:10:29,658 INFO 
[PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:29,658 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229658"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240229658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240229658"}]},"ts":"1690240229658"} 2023-07-24 23:10:29,659 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:29,659 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229659"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240229659"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240229659"}]},"ts":"1690240229659"} 2023-07-24 23:10:29,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:29,661 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=44, state=RUNNABLE; CloseRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:29,663 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=42, state=RUNNABLE; CloseRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:29,664 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:29,666 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:29,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-24 23:10:29,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 030d5a884d950c2557a29b8f5d09b67d, disabling compactions & flushes 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a5773ebe0b9e5a7db3b6a0ead423c767, disabling compactions & flushes 2023-07-24 23:10:29,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:29,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. after waiting 0 ms 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. after waiting 0 ms 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 2023-07-24 23:10:29,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:29,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767. 2023-07-24 23:10:29,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a5773ebe0b9e5a7db3b6a0ead423c767: 2023-07-24 23:10:29,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:29,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d. 
2023-07-24 23:10:29,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 030d5a884d950c2557a29b8f5d09b67d: 2023-07-24 23:10:29,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1342cdaaaaef321fc1c8c4dca7995875, disabling compactions & flushes 2023-07-24 23:10:29,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. after waiting 0 ms 2023-07-24 23:10:29,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,830 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=a5773ebe0b9e5a7db3b6a0ead423c767, regionState=CLOSED 2023-07-24 23:10:29,831 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229830"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240229830"}]},"ts":"1690240229830"} 2023-07-24 23:10:29,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,832 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=030d5a884d950c2557a29b8f5d09b67d, regionState=CLOSED 2023-07-24 23:10:29,832 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229832"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240229832"}]},"ts":"1690240229832"} 2023-07-24 23:10:29,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1d45a257eaaa0c8fececfe733cfc6944, disabling compactions & flushes 2023-07-24 23:10:29,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:29,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:29,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. after waiting 0 ms 2023-07-24 23:10:29,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 2023-07-24 23:10:29,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=42 2023-07-24 23:10:29,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; CloseRegionProcedure a5773ebe0b9e5a7db3b6a0ead423c767, server=jenkins-hbase4.apache.org,33649,1690240221185 in 173 msec 2023-07-24 23:10:29,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=44 2023-07-24 23:10:29,841 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; CloseRegionProcedure 030d5a884d950c2557a29b8f5d09b67d, server=jenkins-hbase4.apache.org,36981,1690240220580 in 177 msec 2023-07-24 23:10:29,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:29,843 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a5773ebe0b9e5a7db3b6a0ead423c767, UNASSIGN in 188 msec 2023-07-24 23:10:29,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:29,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=030d5a884d950c2557a29b8f5d09b67d, UNASSIGN in 189 msec 2023-07-24 23:10:29,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875. 2023-07-24 23:10:29,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1342cdaaaaef321fc1c8c4dca7995875: 2023-07-24 23:10:29,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944. 
2023-07-24 23:10:29,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1d45a257eaaa0c8fececfe733cfc6944: 2023-07-24 23:10:29,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5169c491999325b863c963221245bb28, disabling compactions & flushes 2023-07-24 23:10:29,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:29,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:29,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. after waiting 0 ms 2023-07-24 23:10:29,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 2023-07-24 23:10:29,848 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=1342cdaaaaef321fc1c8c4dca7995875, regionState=CLOSED 2023-07-24 23:10:29,848 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240229848"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240229848"}]},"ts":"1690240229848"} 2023-07-24 23:10:29,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,850 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=1d45a257eaaa0c8fececfe733cfc6944, regionState=CLOSED 2023-07-24 23:10:29,850 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229850"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240229850"}]},"ts":"1690240229850"} 2023-07-24 23:10:29,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:29,858 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-24 23:10:29,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28. 
2023-07-24 23:10:29,858 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 1342cdaaaaef321fc1c8c4dca7995875, server=jenkins-hbase4.apache.org,33649,1690240221185 in 184 msec 2023-07-24 23:10:29,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5169c491999325b863c963221245bb28: 2023-07-24 23:10:29,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5169c491999325b863c963221245bb28 2023-07-24 23:10:29,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-24 23:10:29,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure 1d45a257eaaa0c8fececfe733cfc6944, server=jenkins-hbase4.apache.org,36981,1690240220580 in 192 msec 2023-07-24 23:10:29,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1342cdaaaaef321fc1c8c4dca7995875, UNASSIGN in 206 msec 2023-07-24 23:10:29,862 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=5169c491999325b863c963221245bb28, regionState=CLOSED 2023-07-24 23:10:29,863 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240229862"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240229862"}]},"ts":"1690240229862"} 2023-07-24 23:10:29,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d45a257eaaa0c8fececfe733cfc6944, UNASSIGN in 210 msec 2023-07-24 23:10:29,867 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-24 23:10:29,867 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure 5169c491999325b863c963221245bb28, server=jenkins-hbase4.apache.org,33649,1690240221185 in 205 msec 2023-07-24 23:10:29,869 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=41 2023-07-24 23:10:29,869 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5169c491999325b863c963221245bb28, UNASSIGN in 215 msec 2023-07-24 23:10:29,870 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240229870"}]},"ts":"1690240229870"} 2023-07-24 23:10:29,872 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 23:10:29,874 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 23:10:29,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 239 msec 2023-07-24 23:10:29,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-24 23:10:29,952 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-24 23:10:29,953 INFO [Listener at localhost/39785] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:29,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-24 23:10:29,968 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-24 23:10:29,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-24 23:10:29,982 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:29,982 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:29,982 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:29,982 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:29,982 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:29,988 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits] 2023-07-24 23:10:29,989 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits] 2023-07-24 23:10:29,999 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits] 2023-07-24 23:10:29,999 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits] 2023-07-24 23:10:30,000 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits] 2023-07-24 23:10:30,015 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875/recovered.edits/7.seqid 2023-07-24 23:10:30,017 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1342cdaaaaef321fc1c8c4dca7995875 2023-07-24 23:10:30,021 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944/recovered.edits/7.seqid 2023-07-24 23:10:30,022 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d45a257eaaa0c8fececfe733cfc6944 2023-07-24 23:10:30,023 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d/recovered.edits/7.seqid 2023-07-24 23:10:30,024 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28/recovered.edits/7.seqid 2023-07-24 23:10:30,025 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/030d5a884d950c2557a29b8f5d09b67d 2023-07-24 23:10:30,025 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5169c491999325b863c963221245bb28 2023-07-24 23:10:30,027 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767/recovered.edits/7.seqid 2023-07-24 23:10:30,028 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a5773ebe0b9e5a7db3b6a0ead423c767 2023-07-24 23:10:30,028 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 23:10:30,059 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 23:10:30,067 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 23:10:30,068 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
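[Editorial note, not part of the captured log] The records above trace the server side of a disable-then-truncate request against Group_testTableMoveTruncateAndDrop: the DISABLE procedure (pid=41) completes, the master stores a TruncateTableProcedure with preserveSplits=true (pid=52), the five old region directories are archived, and their rows are removed from hbase:meta. For orientation only, a minimal client-side sketch of how such a request is typically issued through the HBase Admin API; the class name and connection setup are illustrative and not taken from this test:

    // Illustrative sketch only -- assumes a reachable cluster whose configuration
    // is on the classpath; the table name is taken from the log records above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplitsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // The table must be disabled before truncate is accepted (pid=41 above).
          admin.disableTable(tn);
          // preserveSplits=true keeps the existing region boundaries, matching
          // "TruncateTableProcedure (table=... preserveSplits=true)" (pid=52 above).
          admin.truncateTable(tn, true);
        }
      }
    }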
2023-07-24 23:10:30,068 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240230068"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,068 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240230068"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,068 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240230068"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,068 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240230068"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,068 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240230068"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-24 23:10:30,074 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 23:10:30,074 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a5773ebe0b9e5a7db3b6a0ead423c767, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240226217.a5773ebe0b9e5a7db3b6a0ead423c767.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5169c491999325b863c963221245bb28, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240226217.5169c491999325b863c963221245bb28.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 030d5a884d950c2557a29b8f5d09b67d, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240226217.030d5a884d950c2557a29b8f5d09b67d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1d45a257eaaa0c8fececfe733cfc6944, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240226217.1d45a257eaaa0c8fececfe733cfc6944.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 1342cdaaaaef321fc1c8c4dca7995875, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240226217.1342cdaaaaef321fc1c8c4dca7995875.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 23:10:30,074 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-24 23:10:30,074 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240230074"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:30,077 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 23:10:30,089 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,089 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,089 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:30,089 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:30,089 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,090 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 empty. 2023-07-24 23:10:30,090 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 empty. 2023-07-24 23:10:30,090 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 empty. 2023-07-24 23:10:30,090 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 empty. 2023-07-24 23:10:30,090 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 empty. 
2023-07-24 23:10:30,091 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,092 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:30,092 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:30,092 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,092 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,092 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 23:10:30,134 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:30,137 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 72b6613191f6b7808d16695767d588b6, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:30,139 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 05ed8c45bb718b32734a1a7aa2821911, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:30,140 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f870eace70bec3ba9e6c235e00c9aa66, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:30,211 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 05ed8c45bb718b32734a1a7aa2821911, disabling compactions & flushes 2023-07-24 23:10:30,212 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:30,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:30,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. after waiting 0 ms 2023-07-24 23:10:30,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:30,212 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 
2023-07-24 23:10:30,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 05ed8c45bb718b32734a1a7aa2821911: 2023-07-24 23:10:30,213 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 692833e921139cec2eb1de34ad198063, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:30,217 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,217 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f870eace70bec3ba9e6c235e00c9aa66, disabling compactions & flushes 2023-07-24 23:10:30,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:30,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:30,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. after waiting 0 ms 2023-07-24 23:10:30,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:30,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 
2023-07-24 23:10:30,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f870eace70bec3ba9e6c235e00c9aa66: 2023-07-24 23:10:30,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a6e51d6b60782c242a9159439a460645, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 692833e921139cec2eb1de34ad198063, disabling compactions & flushes 2023-07-24 23:10:30,249 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:30,249 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:30,249 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. after waiting 0 ms 2023-07-24 23:10:30,249 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:30,249 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 
2023-07-24 23:10:30,250 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 692833e921139cec2eb1de34ad198063: 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a6e51d6b60782c242a9159439a460645, disabling compactions & flushes 2023-07-24 23:10:30,254 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. after waiting 0 ms 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,254 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,254 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a6e51d6b60782c242a9159439a460645: 2023-07-24 23:10:30,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-24 23:10:30,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-24 23:10:30,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,611 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 72b6613191f6b7808d16695767d588b6, disabling compactions & flushes 2023-07-24 23:10:30,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 
2023-07-24 23:10:30,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. after waiting 0 ms 2023-07-24 23:10:30,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,612 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,612 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 72b6613191f6b7808d16695767d588b6: 2023-07-24 23:10:30,616 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240230616"}]},"ts":"1690240230616"} 2023-07-24 23:10:30,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240230616"}]},"ts":"1690240230616"} 2023-07-24 23:10:30,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240230616"}]},"ts":"1690240230616"} 2023-07-24 23:10:30,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240230616"}]},"ts":"1690240230616"} 2023-07-24 23:10:30,617 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240230616"}]},"ts":"1690240230616"} 2023-07-24 23:10:30,621 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
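[Editorial note, not part of the captured log] At this point TruncateTableProcedure has re-created five empty regions with the original boundaries ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '') and added them to hbase:meta. In this run that happens entirely server-side because preserveSplits=true; purely as a hedged reference, the client-side way to obtain the same five-region layout on a fresh table would be a pre-split create along these lines (split bytes copied from the keys logged above):

    // Illustrative sketch only -- no such client call occurs in this test.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    final class PreSplitSketch {
      static void createPreSplit(Admin admin) throws java.io.IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Four split points yield the five regions listed in the meta Puts above;
        // the two middle keys are the binary values shown as i\xBF\x14i\xBE and r\x1C\xC7r\x1B.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(desc, splits);
      }
    }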
2023-07-24 23:10:30,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240230622"}]},"ts":"1690240230622"} 2023-07-24 23:10:30,624 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 23:10:30,629 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:30,629 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:30,629 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:30,629 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:30,632 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, ASSIGN}] 2023-07-24 23:10:30,635 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, ASSIGN 2023-07-24 23:10:30,635 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, ASSIGN 2023-07-24 23:10:30,635 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, ASSIGN 2023-07-24 23:10:30,636 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, ASSIGN 2023-07-24 23:10:30,636 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, ASSIGN 2023-07-24 23:10:30,637 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:30,637 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:30,637 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:30,638 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:30,638 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:30,788 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
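[Editorial note, not part of the captured log] The balancer has now produced an assignment plan and the five ASSIGN subprocedures (pid=53 through 57) are transitioning the new regions from OFFLINE toward OPEN on the two target RegionServers. As an illustrative sketch only (names and setup are not from the test code), the resulting assignment can be observed from a client through the RegionLocator API:

    // Illustrative sketch only -- prints each region and the server it was assigned to,
    // e.g. jenkins-hbase4.apache.org,33649,1690240221185 in the records above.
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class AssignmentCheckSketch {
      static void printLocations(Connection conn) throws java.io.IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          // One HRegionLocation per region; getServerName() is the RegionServer
          // chosen by the master's assignment plan.
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            System.out.println(loc.getRegion().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }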
2023-07-24 23:10:30,792 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=f870eace70bec3ba9e6c235e00c9aa66, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:30,792 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=a6e51d6b60782c242a9159439a460645, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:30,793 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240230792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240230792"}]},"ts":"1690240230792"} 2023-07-24 23:10:30,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=692833e921139cec2eb1de34ad198063, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:30,793 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240230792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240230792"}]},"ts":"1690240230792"} 2023-07-24 23:10:30,793 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240230792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240230792"}]},"ts":"1690240230792"} 2023-07-24 23:10:30,793 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=72b6613191f6b7808d16695767d588b6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:30,793 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230793"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240230793"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240230793"}]},"ts":"1690240230793"} 2023-07-24 23:10:30,792 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=05ed8c45bb718b32734a1a7aa2821911, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:30,794 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240230792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240230792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240230792"}]},"ts":"1690240230792"} 2023-07-24 23:10:30,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; OpenRegionProcedure 
f870eace70bec3ba9e6c235e00c9aa66, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:30,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=56, state=RUNNABLE; OpenRegionProcedure 692833e921139cec2eb1de34ad198063, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:30,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=53, state=RUNNABLE; OpenRegionProcedure 72b6613191f6b7808d16695767d588b6, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:30,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=57, state=RUNNABLE; OpenRegionProcedure a6e51d6b60782c242a9159439a460645, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:30,803 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=54, state=RUNNABLE; OpenRegionProcedure 05ed8c45bb718b32734a1a7aa2821911, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:30,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,956 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72b6613191f6b7808d16695767d588b6, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 23:10:30,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 
2023-07-24 23:10:30,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a6e51d6b60782c242a9159439a460645, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 23:10:30,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,961 INFO [StoreOpener-a6e51d6b60782c242a9159439a460645-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,963 INFO [StoreOpener-72b6613191f6b7808d16695767d588b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,963 DEBUG [StoreOpener-a6e51d6b60782c242a9159439a460645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/f 2023-07-24 23:10:30,963 DEBUG [StoreOpener-a6e51d6b60782c242a9159439a460645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/f 2023-07-24 23:10:30,964 INFO [StoreOpener-a6e51d6b60782c242a9159439a460645-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a6e51d6b60782c242a9159439a460645 columnFamilyName f 2023-07-24 23:10:30,964 INFO [StoreOpener-a6e51d6b60782c242a9159439a460645-1] regionserver.HStore(310): Store=a6e51d6b60782c242a9159439a460645/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:30,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,967 DEBUG [StoreOpener-72b6613191f6b7808d16695767d588b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/f 2023-07-24 23:10:30,967 DEBUG [StoreOpener-72b6613191f6b7808d16695767d588b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/f 2023-07-24 23:10:30,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,967 INFO [StoreOpener-72b6613191f6b7808d16695767d588b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 72b6613191f6b7808d16695767d588b6 columnFamilyName f 2023-07-24 23:10:30,969 INFO [StoreOpener-72b6613191f6b7808d16695767d588b6-1] regionserver.HStore(310): Store=72b6613191f6b7808d16695767d588b6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:30,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:30,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:30,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:30,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:30,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a6e51d6b60782c242a9159439a460645; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10487143840, jitterRate=-0.02330861985683441}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:30,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a6e51d6b60782c242a9159439a460645: 2023-07-24 23:10:30,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 72b6613191f6b7808d16695767d588b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10895810080, jitterRate=0.014751389622688293}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:30,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 72b6613191f6b7808d16695767d588b6: 2023-07-24 23:10:30,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645., pid=61, masterSystemTime=1690240230951 2023-07-24 23:10:30,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6., pid=60, masterSystemTime=1690240230952 2023-07-24 23:10:30,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:30,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 
2023-07-24 23:10:30,992 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=72b6613191f6b7808d16695767d588b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:30,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 05ed8c45bb718b32734a1a7aa2821911, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 23:10:30,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:30,992 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230991"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240230991"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240230991"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240230991"}]},"ts":"1690240230991"} 2023-07-24 23:10:30,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:30,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 
2023-07-24 23:10:30,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f870eace70bec3ba9e6c235e00c9aa66, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 23:10:30,993 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=a6e51d6b60782c242a9159439a460645, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:30,993 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240230993"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240230993"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240230993"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240230993"}]},"ts":"1690240230993"} 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,995 INFO [StoreOpener-f870eace70bec3ba9e6c235e00c9aa66-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:30,997 DEBUG [StoreOpener-f870eace70bec3ba9e6c235e00c9aa66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/f 2023-07-24 23:10:30,997 DEBUG [StoreOpener-f870eace70bec3ba9e6c235e00c9aa66-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/f 2023-07-24 23:10:30,998 INFO [StoreOpener-f870eace70bec3ba9e6c235e00c9aa66-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f870eace70bec3ba9e6c235e00c9aa66 columnFamilyName f 2023-07-24 23:10:30,998 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=53 2023-07-24 23:10:30,998 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=53, state=SUCCESS; OpenRegionProcedure 72b6613191f6b7808d16695767d588b6, server=jenkins-hbase4.apache.org,33649,1690240221185 in 195 msec 2023-07-24 23:10:30,999 INFO [StoreOpener-f870eace70bec3ba9e6c235e00c9aa66-1] regionserver.HStore(310): Store=f870eace70bec3ba9e6c235e00c9aa66/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:31,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=57 2023-07-24 23:10:31,000 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=57, state=SUCCESS; OpenRegionProcedure a6e51d6b60782c242a9159439a460645, server=jenkins-hbase4.apache.org,36981,1690240220580 in 195 msec 2023-07-24 23:10:31,000 INFO [StoreOpener-05ed8c45bb718b32734a1a7aa2821911-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,001 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, ASSIGN in 369 msec 2023-07-24 23:10:31,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, ASSIGN in 368 msec 2023-07-24 23:10:31,003 DEBUG [StoreOpener-05ed8c45bb718b32734a1a7aa2821911-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/f 2023-07-24 23:10:31,003 DEBUG [StoreOpener-05ed8c45bb718b32734a1a7aa2821911-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/f 2023-07-24 23:10:31,003 INFO [StoreOpener-05ed8c45bb718b32734a1a7aa2821911-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 05ed8c45bb718b32734a1a7aa2821911 columnFamilyName f 2023-07-24 23:10:31,004 INFO [StoreOpener-05ed8c45bb718b32734a1a7aa2821911-1] regionserver.HStore(310): Store=05ed8c45bb718b32734a1a7aa2821911/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:31,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:31,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f870eace70bec3ba9e6c235e00c9aa66; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10312644640, jitterRate=-0.03956012427806854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:31,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f870eace70bec3ba9e6c235e00c9aa66: 2023-07-24 23:10:31,009 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66., pid=58, masterSystemTime=1690240230951 2023-07-24 
23:10:31,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:31,011 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:31,012 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=f870eace70bec3ba9e6c235e00c9aa66, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:31,012 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231012"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240231012"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240231012"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240231012"}]},"ts":"1690240231012"} 2023-07-24 23:10:31,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:31,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 05ed8c45bb718b32734a1a7aa2821911; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9580419680, jitterRate=-0.10775388777256012}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:31,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 05ed8c45bb718b32734a1a7aa2821911: 2023-07-24 23:10:31,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911., pid=62, masterSystemTime=1690240230952 2023-07-24 23:10:31,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:31,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:31,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 
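For reference, the compactions.CompactionConfiguration(173) records above report the store's effective compaction settings (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 with 0.5 jitter). A minimal sketch of setting the same values explicitly, assuming the standard configuration keys map to those fields; the test itself simply runs with the shipped defaults:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
        public static void main(String[] args) {
            // Keys assumed to correspond to the values logged by CompactionConfiguration above;
            // all of them are the shipped defaults, shown here only for illustration.
            Configuration conf = HBaseConfiguration.create();
            conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);        // minCompactSize: 128 MB
            conf.setInt("hbase.hstore.compaction.min", 3);                               // minFilesToCompact
            conf.setInt("hbase.hstore.compaction.max", 10);                              // maxFilesToCompact
            conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);                        // compaction ratio
            conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);                // off-peak ratio
            conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);  // throttle point
            conf.setLong("hbase.hregion.majorcompaction", 604800000L);                   // major period: 7 days
            conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);                 // major jitter
        }
    }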
2023-07-24 23:10:31,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 692833e921139cec2eb1de34ad198063, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 23:10:31,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:31,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,018 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-24 23:10:31,018 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=05ed8c45bb718b32734a1a7aa2821911, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:31,018 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231018"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240231018"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240231018"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240231018"}]},"ts":"1690240231018"} 2023-07-24 23:10:31,018 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; OpenRegionProcedure f870eace70bec3ba9e6c235e00c9aa66, server=jenkins-hbase4.apache.org,36981,1690240220580 in 217 msec 2023-07-24 23:10:31,019 INFO [StoreOpener-692833e921139cec2eb1de34ad198063-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,021 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, ASSIGN in 386 msec 2023-07-24 23:10:31,021 DEBUG [StoreOpener-692833e921139cec2eb1de34ad198063-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/f 2023-07-24 23:10:31,022 DEBUG [StoreOpener-692833e921139cec2eb1de34ad198063-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/f 2023-07-24 23:10:31,022 INFO [StoreOpener-692833e921139cec2eb1de34ad198063-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 692833e921139cec2eb1de34ad198063 columnFamilyName f 2023-07-24 23:10:31,023 INFO [StoreOpener-692833e921139cec2eb1de34ad198063-1] regionserver.HStore(310): Store=692833e921139cec2eb1de34ad198063/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:31,023 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=54 2023-07-24 23:10:31,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=54, state=SUCCESS; OpenRegionProcedure 05ed8c45bb718b32734a1a7aa2821911, server=jenkins-hbase4.apache.org,33649,1690240221185 in 218 msec 2023-07-24 23:10:31,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, ASSIGN in 391 msec 2023-07-24 23:10:31,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:31,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 692833e921139cec2eb1de34ad198063; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11992556800, jitterRate=0.11689388751983643}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:31,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 692833e921139cec2eb1de34ad198063: 2023-07-24 
23:10:31,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063., pid=59, masterSystemTime=1690240230952 2023-07-24 23:10:31,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,034 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=692833e921139cec2eb1de34ad198063, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:31,034 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231034"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240231034"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240231034"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240231034"}]},"ts":"1690240231034"} 2023-07-24 23:10:31,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=56 2023-07-24 23:10:31,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=56, state=SUCCESS; OpenRegionProcedure 692833e921139cec2eb1de34ad198063, server=jenkins-hbase4.apache.org,33649,1690240221185 in 239 msec 2023-07-24 23:10:31,041 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-24 23:10:31,041 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, ASSIGN in 406 msec 2023-07-24 23:10:31,041 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240231041"}]},"ts":"1690240231041"} 2023-07-24 23:10:31,043 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 23:10:31,045 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-24 23:10:31,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.0860 sec 2023-07-24 23:10:31,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-24 23:10:31,083 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-24 23:10:31,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,087 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-24 23:10:31,093 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240231093"}]},"ts":"1690240231093"} 2023-07-24 23:10:31,095 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 23:10:31,098 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 23:10:31,099 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, UNASSIGN}] 2023-07-24 23:10:31,102 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, UNASSIGN 2023-07-24 23:10:31,102 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for 
pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, UNASSIGN 2023-07-24 23:10:31,102 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, UNASSIGN 2023-07-24 23:10:31,102 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, UNASSIGN 2023-07-24 23:10:31,102 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, UNASSIGN 2023-07-24 23:10:31,103 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=a6e51d6b60782c242a9159439a460645, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:31,103 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=692833e921139cec2eb1de34ad198063, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:31,104 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240231103"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231103"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231103"}]},"ts":"1690240231103"} 2023-07-24 23:10:31,103 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=f870eace70bec3ba9e6c235e00c9aa66, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:31,104 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231103"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231103"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231103"}]},"ts":"1690240231103"} 2023-07-24 23:10:31,104 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231103"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231103"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231103"}]},"ts":"1690240231103"} 2023-07-24 23:10:31,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=72b6613191f6b7808d16695767d588b6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:31,104 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=05ed8c45bb718b32734a1a7aa2821911, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:31,104 DEBUG [PEWorker-1] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240231104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231104"}]},"ts":"1690240231104"} 2023-07-24 23:10:31,104 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231104"}]},"ts":"1690240231104"} 2023-07-24 23:10:31,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=68, state=RUNNABLE; CloseRegionProcedure a6e51d6b60782c242a9159439a460645, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:31,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=67, state=RUNNABLE; CloseRegionProcedure 692833e921139cec2eb1de34ad198063, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:31,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=66, state=RUNNABLE; CloseRegionProcedure f870eace70bec3ba9e6c235e00c9aa66, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:31,109 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=64, state=RUNNABLE; CloseRegionProcedure 72b6613191f6b7808d16695767d588b6, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:31,110 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=65, state=RUNNABLE; CloseRegionProcedure 05ed8c45bb718b32734a1a7aa2821911, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:31,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-24 23:10:31,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f870eace70bec3ba9e6c235e00c9aa66, disabling compactions & flushes 2023-07-24 23:10:31,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:31,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:31,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. after waiting 0 ms 2023-07-24 23:10:31,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 
2023-07-24 23:10:31,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 692833e921139cec2eb1de34ad198063, disabling compactions & flushes 2023-07-24 23:10:31,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. after waiting 0 ms 2023-07-24 23:10:31,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:31,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66. 2023-07-24 23:10:31,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f870eace70bec3ba9e6c235e00c9aa66: 2023-07-24 23:10:31,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:31,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a6e51d6b60782c242a9159439a460645, disabling compactions & flushes 2023-07-24 23:10:31,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:31,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:31,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. after waiting 0 ms 2023-07-24 23:10:31,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 
2023-07-24 23:10:31,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:31,273 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=f870eace70bec3ba9e6c235e00c9aa66, regionState=CLOSED 2023-07-24 23:10:31,273 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231273"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240231273"}]},"ts":"1690240231273"} 2023-07-24 23:10:31,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063. 2023-07-24 23:10:31,274 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 692833e921139cec2eb1de34ad198063: 2023-07-24 23:10:31,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 05ed8c45bb718b32734a1a7aa2821911, disabling compactions & flushes 2023-07-24 23:10:31,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:31,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 2023-07-24 23:10:31,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. after waiting 0 ms 2023-07-24 23:10:31,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 
2023-07-24 23:10:31,277 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=692833e921139cec2eb1de34ad198063, regionState=CLOSED 2023-07-24 23:10:31,277 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231277"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240231277"}]},"ts":"1690240231277"} 2023-07-24 23:10:31,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:31,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645. 2023-07-24 23:10:31,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a6e51d6b60782c242a9159439a460645: 2023-07-24 23:10:31,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-24 23:10:31,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; CloseRegionProcedure f870eace70bec3ba9e6c235e00c9aa66, server=jenkins-hbase4.apache.org,36981,1690240220580 in 167 msec 2023-07-24 23:10:31,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:31,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=67 2023-07-24 23:10:31,284 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=67, state=SUCCESS; CloseRegionProcedure 692833e921139cec2eb1de34ad198063, server=jenkins-hbase4.apache.org,33649,1690240221185 in 172 msec 2023-07-24 23:10:31,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911. 
2023-07-24 23:10:31,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 05ed8c45bb718b32734a1a7aa2821911: 2023-07-24 23:10:31,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:31,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f870eace70bec3ba9e6c235e00c9aa66, UNASSIGN in 182 msec 2023-07-24 23:10:31,284 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=a6e51d6b60782c242a9159439a460645, regionState=CLOSED 2023-07-24 23:10:31,284 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240231284"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240231284"}]},"ts":"1690240231284"} 2023-07-24 23:10:31,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=692833e921139cec2eb1de34ad198063, UNASSIGN in 185 msec 2023-07-24 23:10:31,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:31,292 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 72b6613191f6b7808d16695767d588b6, disabling compactions & flushes 2023-07-24 23:10:31,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:31,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:31,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. after waiting 0 ms 2023-07-24 23:10:31,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 
2023-07-24 23:10:31,293 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=05ed8c45bb718b32734a1a7aa2821911, regionState=CLOSED 2023-07-24 23:10:31,293 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690240231293"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240231293"}]},"ts":"1690240231293"} 2023-07-24 23:10:31,294 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=68 2023-07-24 23:10:31,294 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=68, state=SUCCESS; CloseRegionProcedure a6e51d6b60782c242a9159439a460645, server=jenkins-hbase4.apache.org,36981,1690240220580 in 180 msec 2023-07-24 23:10:31,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a6e51d6b60782c242a9159439a460645, UNASSIGN in 195 msec 2023-07-24 23:10:31,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:31,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6. 2023-07-24 23:10:31,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 72b6613191f6b7808d16695767d588b6: 2023-07-24 23:10:31,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=65 2023-07-24 23:10:31,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=65, state=SUCCESS; CloseRegionProcedure 05ed8c45bb718b32734a1a7aa2821911, server=jenkins-hbase4.apache.org,33649,1690240221185 in 185 msec 2023-07-24 23:10:31,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:31,312 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=05ed8c45bb718b32734a1a7aa2821911, UNASSIGN in 208 msec 2023-07-24 23:10:31,312 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=72b6613191f6b7808d16695767d588b6, regionState=CLOSED 2023-07-24 23:10:31,312 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690240231312"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240231312"}]},"ts":"1690240231312"} 2023-07-24 23:10:31,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=64 2023-07-24 23:10:31,316 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=64, state=SUCCESS; CloseRegionProcedure 72b6613191f6b7808d16695767d588b6, server=jenkins-hbase4.apache.org,33649,1690240221185 in 204 msec 
2023-07-24 23:10:31,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-24 23:10:31,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=72b6613191f6b7808d16695767d588b6, UNASSIGN in 217 msec 2023-07-24 23:10:31,319 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240231319"}]},"ts":"1690240231319"} 2023-07-24 23:10:31,320 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 23:10:31,322 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 23:10:31,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 235 msec 2023-07-24 23:10:31,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-24 23:10:31,396 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-24 23:10:31,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,412 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1730139615' 2023-07-24 23:10:31,415 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:31,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 23:10:31,440 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:31,441 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,441 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,441 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,441 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:31,443 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/recovered.edits] 2023-07-24 23:10:31,444 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/recovered.edits] 2023-07-24 23:10:31,445 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/recovered.edits] 2023-07-24 23:10:31,445 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/recovered.edits] 2023-07-24 23:10:31,446 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/recovered.edits] 
2023-07-24 23:10:31,457 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6/recovered.edits/4.seqid 2023-07-24 23:10:31,458 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911/recovered.edits/4.seqid 2023-07-24 23:10:31,458 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063/recovered.edits/4.seqid 2023-07-24 23:10:31,458 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645/recovered.edits/4.seqid 2023-07-24 23:10:31,459 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/72b6613191f6b7808d16695767d588b6 2023-07-24 23:10:31,459 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/05ed8c45bb718b32734a1a7aa2821911 2023-07-24 23:10:31,459 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/692833e921139cec2eb1de34ad198063 2023-07-24 23:10:31,459 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a6e51d6b60782c242a9159439a460645 2023-07-24 23:10:31,460 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66/recovered.edits/4.seqid 2023-07-24 23:10:31,460 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f870eace70bec3ba9e6c235e00c9aa66 2023-07-24 23:10:31,460 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 23:10:31,463 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,469 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 23:10:31,472 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 23:10:31,473 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,473 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 23:10:31,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240231474"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240231474"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240231474"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690240230031.692833e921139cec2eb1de34ad198063.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240231474"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,474 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240231474"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,476 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 23:10:31,476 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 72b6613191f6b7808d16695767d588b6, NAME => 'Group_testTableMoveTruncateAndDrop,,1690240230031.72b6613191f6b7808d16695767d588b6.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 05ed8c45bb718b32734a1a7aa2821911, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690240230031.05ed8c45bb718b32734a1a7aa2821911.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f870eace70bec3ba9e6c235e00c9aa66, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690240230031.f870eace70bec3ba9e6c235e00c9aa66.', 
STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 692833e921139cec2eb1de34ad198063, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690240230031.692833e921139cec2eb1de34ad198063.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => a6e51d6b60782c242a9159439a460645, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690240230031.a6e51d6b60782c242a9159439a460645.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 23:10:31,477 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-24 23:10:31,477 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240231477"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:31,478 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 23:10:31,481 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 23:10:31,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 78 msec 2023-07-24 23:10:31,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 23:10:31,531 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-24 23:10:31,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
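For reference, the sequence just logged — TruncateTableProcedure (pid=52), DisableTableProcedure (pid=63) and DeleteTableProcedure (pid=74) — corresponds to ordinary Admin calls issued by the test client. A minimal sketch, assuming a connection built from the mini cluster's configuration (the table name is taken from the log; everything else is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateDisableDropSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml points at the test cluster
            TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // truncateTable() requires the table to be disabled beforehand
                // (done earlier in the test, before this excerpt of the log).
                admin.truncateTable(table, true); // TruncateTableProcedure, preserveSplits=true
                admin.disableTable(table);        // DisableTableProcedure
                admin.deleteTable(table);         // DeleteTableProcedure
            }
        }
    }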
2023-07-24 23:10:31,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:31,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:31,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:31,554 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:31,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup default 2023-07-24 23:10:31,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1730139615, current retry=0 2023-07-24 23:10:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1730139615 => default 2023-07-24 23:10:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,567 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1730139615 2023-07-24 23:10:31,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:31,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,576 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:31,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:31,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-24 23:10:31,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:31,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241431600, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:31,602 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:31,604 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:31,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,608 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:31,609 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:31,609 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,645 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 420) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x68365814-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c-prefix:jenkins-hbase4.apache.org,46215,1690240224735 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: qtp20324208-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1519090482_17 at /127.0.0.1:48156 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46215 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:48134 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:48096 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59310@0x1c443cd9-SendThread(127.0.0.1:59310) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp20324208-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59310@0x1c443cd9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-167412592_17 at /127.0.0.1:43794 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c-prefix:jenkins-hbase4.apache.org,46215,1690240224735.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59310@0x1c443cd9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:38733 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1086484754_17 at /127.0.0.1:42288 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x68365814-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:54030 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:42184 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46215Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x63581179-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46215-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-635-acceptor-0@53f7fd54-ServerConnector@143940fe{HTTP/1.1, (http/1.1)}{0.0.0.0:33289} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46215 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-101668ec-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp20324208-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:54072 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:38733 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:42228 [Receiving block BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1477399179-172.31.14.131-1690240214735:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread 
LEAK? -, OpenFileDescriptor=805 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=413 (was 413), ProcessCount=177 (was 177), AvailableMemoryMB=6229 (was 6659) 2023-07-24 23:10:31,646 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-24 23:10:31,667 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=805, MaxFileDescriptor=60000, SystemLoadAverage=413, ProcessCount=177, AvailableMemoryMB=6227 2023-07-24 23:10:31,667 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-24 23:10:31,668 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-24 23:10:31,676 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,679 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:31,679 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:31,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,683 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:31,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:31,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,695 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:31,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:31,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 
23:10:31,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:31,702 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,705 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:31,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241431708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:31,709 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:31,711 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:31,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,712 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:31,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:31,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-24 23:10:31,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:34864 deadline: 1690241431714, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 23:10:31,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-24 23:10:31,715 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:34864 deadline: 1690241431715, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 23:10:31,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-24 23:10:31,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:34864 deadline: 1690241431716, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 23:10:31,718 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-24 23:10:31,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-24 23:10:31,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:31,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
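The records above trace testValidGroupNames: add rsgroup is rejected for foo*, foo@ and - with "RSGroup name should only contain alphanumeric characters", while foo_123 is accepted and written to ZooKeeper. A minimal standalone sketch of that check, assuming the [a-zA-Z0-9_]+ pattern implied by those rejections (class and method names below are illustrative, not the HBase source):

    import java.util.regex.Pattern;

    public class RSGroupNameCheckSketch {
        // Pattern assumed from the log: underscores pass, '*', '@' and '-' are rejected.
        private static final Pattern GROUP_NAME = Pattern.compile("[a-zA-Z0-9_]+");

        static void checkGroupName(String name) {
            if (name == null || !GROUP_NAME.matcher(name).matches()) {
                // Mirrors the ConstraintException message seen in the log.
                throw new IllegalArgumentException(
                    "RSGroup name should only contain alphanumeric characters: " + name);
            }
        }

        public static void main(String[] args) {
            for (String candidate : new String[] { "foo*", "foo@", "-", "foo_123" }) {
                try {
                    checkGroupName(candidate);
                    System.out.println(candidate + " -> accepted");
                } catch (IllegalArgumentException e) {
                    System.out.println(candidate + " -> rejected");
                }
            }
        }
    }

Run against the four names exercised above, only foo_123 is accepted, which matches the four add rsgroup calls in the log.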
2023-07-24 23:10:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:31,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-24 23:10:31,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:31,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
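The MoveTables and MoveServers requests in the teardown records above carry empty sets, and the server side simply logs "moveTables() passed an empty set. Ignoring." and returns. A self-contained sketch of that guard, with hypothetical class and method names rather than the RSGroupAdminServer source:

    import java.util.Collections;
    import java.util.Set;

    public class MoveGuardSketch {
        // An empty or null input is logged and ignored instead of starting a move,
        // matching the DEBUG lines emitted during the teardown above.
        static void moveTables(Set<String> tables, String targetGroup) {
            if (tables == null || tables.isEmpty()) {
                System.out.println("moveTables() passed an empty set. Ignoring.");
                return;
            }
            System.out.println("moving " + tables + " to rsgroup " + targetGroup);
        }

        public static void main(String[] args) {
            moveTables(Collections.<String>emptySet(), "default"); // ignored, as in the log
            moveTables(Collections.singleton("t1"), "foo_123");    // would actually move
        }
    }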
2023-07-24 23:10:31,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:31,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:31,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:31,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,771 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:31,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:31,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:31,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:31,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241431792, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:31,793 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:31,795 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:31,796 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,797 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:31,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:31,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,820 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x63581179-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=805 (was 805), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=413 (was 413), ProcessCount=177 (was 177), AvailableMemoryMB=6225 (was 6227) 2023-07-24 23:10:31,820 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-24 23:10:31,839 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=805, MaxFileDescriptor=60000, SystemLoadAverage=413, ProcessCount=177, AvailableMemoryMB=6223 2023-07-24 23:10:31,840 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-24 23:10:31,840 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-24 23:10:31,845 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,845 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:31,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
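The teardown/setup stanza above reduces to a handful of RSGroupAdmin calls. A minimal sketch of the equivalent client-side sequence, assuming the branch-2.4 hbase-rsgroup API named in the stack trace (RSGroupAdminClient, Address) and reusing the master host/port shown in the log (the class name RsGroupTeardownSketch is illustrative only):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdmin admin = new RSGroupAdminClient(conn);
          admin.removeRSGroup("master");   // the RemoveRSGroup request in the log
          admin.addRSGroup("master");      // the AddRSGroup request in the log
          try {
            // Only live region servers are group members, so the active master's
            // own address is rejected, producing the ConstraintException above.
            admin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42959)),
                "master");
          } catch (ConstraintException e) {
            // "Server ... is either offline or it does not exist." — the test
            // tolerates this and only logs "Got this on setup, FYI".
          }
        }
      }
    }

The exception is therefore expected noise between test methods rather than a failure in its own right.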
2023-07-24 23:10:31,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:31,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:31,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:31,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:31,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:31,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:31,865 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:31,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:31,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:31,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,879 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:31,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:31,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241431879, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:31,880 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:31,882 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:31,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,883 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:31,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:31,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:31,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:31,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
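The "Waiting for cleanup to finish" and GetRSGroupInfo records correspond to a polling predicate before the next test method starts. A rough sketch of that wait, continuing the fragment above ('conf' and 'admin' as declared there) and assuming the test-side Waiter utility that the hbase.Waiter(180) line points at:

    // Assumption: this approximates TestRSGroupsBase's check that every live region
    // server is back in the default group and only "default" and "master" remain.
    // RSGroupInfo is org.apache.hadoop.hbase.rsgroup.RSGroupInfo.
    org.apache.hadoop.hbase.Waiter.waitFor(conf, 60000,
        new org.apache.hadoop.hbase.Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            RSGroupInfo dflt = admin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
            // Done once the default group again holds all four region servers
            // and no extra test groups are left.
            return dflt.getServers().size() == 4 && admin.listRSGroups().size() == 2;
          }
        });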
2023-07-24 23:10:31,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:31,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:31,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:31,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:31,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:31,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429] to rsgroup bar 2023-07-24 23:10:31,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:31,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:31,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:31,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:31,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(238): Moving server region 386ba32f0c3b0408cdca5a4ed5ced8e4, which do not belong to RSGroup bar 2023-07-24 23:10:31,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, REOPEN/MOVE 2023-07-24 23:10:31,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(238): Moving server region c59756cef5ea3b9231917a64964f5e23, which do not belong to RSGroup bar 2023-07-24 23:10:31,911 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, REOPEN/MOVE 2023-07-24 23:10:31,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, REOPEN/MOVE 2023-07-24 23:10:31,912 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:31,913 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, REOPEN/MOVE 2023-07-24 23:10:31,913 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240231912"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231912"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231912"}]},"ts":"1690240231912"} 2023-07-24 23:10:31,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-24 23:10:31,914 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:31,914 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240231914"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240231914"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240231914"}]},"ts":"1690240231914"} 2023-07-24 23:10:31,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:31,916 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:32,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c59756cef5ea3b9231917a64964f5e23, disabling compactions & flushes 2023-07-24 23:10:32,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. after waiting 0 ms 2023-07-24 23:10:32,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 
2023-07-24 23:10:32,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c59756cef5ea3b9231917a64964f5e23 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-24 23:10:32,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/.tmp/info/e98ee0d65ea04bc8aba8a39261ff2c07 2023-07-24 23:10:32,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/.tmp/info/e98ee0d65ea04bc8aba8a39261ff2c07 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info/e98ee0d65ea04bc8aba8a39261ff2c07 2023-07-24 23:10:32,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info/e98ee0d65ea04bc8aba8a39261ff2c07, entries=2, sequenceid=6, filesize=4.8 K 2023-07-24 23:10:32,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c59756cef5ea3b9231917a64964f5e23 in 57ms, sequenceid=6, compaction requested=false 2023-07-24 23:10:32,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-24 23:10:32,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c59756cef5ea3b9231917a64964f5e23: 2023-07-24 23:10:32,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c59756cef5ea3b9231917a64964f5e23 move to jenkins-hbase4.apache.org,46215,1690240224735 record at close sequenceid=6 2023-07-24 23:10:32,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 386ba32f0c3b0408cdca5a4ed5ced8e4, disabling compactions & flushes 2023-07-24 23:10:32,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:32,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 
2023-07-24 23:10:32,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. after waiting 0 ms 2023-07-24 23:10:32,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:32,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 386ba32f0c3b0408cdca5a4ed5ced8e4 1/1 column families, dataSize=6.42 KB heapSize=10.48 KB 2023-07-24 23:10:32,138 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=CLOSED 2023-07-24 23:10:32,139 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240232138"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240232138"}]},"ts":"1690240232138"} 2023-07-24 23:10:32,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-24 23:10:32,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,42429,1690240220974 in 224 msec 2023-07-24 23:10:32,144 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:32,158 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.42 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/.tmp/m/8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/.tmp/m/8c880ec8273e4a18b62a3ba6f70e67b7 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m/8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m/8c880ec8273e4a18b62a3ba6f70e67b7, entries=9, sequenceid=26, filesize=5.5 K 2023-07-24 23:10:32,176 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.42 KB/6569, heapSize ~10.47 KB/10720, currentSize=0 B/0 for 386ba32f0c3b0408cdca5a4ed5ced8e4 in 38ms, sequenceid=26, compaction requested=false 2023-07-24 23:10:32,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 23:10:32,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:32,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:32,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 386ba32f0c3b0408cdca5a4ed5ced8e4: 2023-07-24 23:10:32,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 386ba32f0c3b0408cdca5a4ed5ced8e4 move to jenkins-hbase4.apache.org,46215,1690240224735 record at close sequenceid=26 2023-07-24 23:10:32,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,189 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, regionState=CLOSED 2023-07-24 23:10:32,189 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240232189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240232189"}]},"ts":"1690240232189"} 2023-07-24 23:10:32,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-24 23:10:32,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,42429,1690240220974 in 275 msec 2023-07-24 23:10:32,193 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:32,193 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:32,194 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240232193"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240232193"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240232193"}]},"ts":"1690240232193"} 2023-07-24 23:10:32,194 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, 
regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:32,195 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240232194"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240232194"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240232194"}]},"ts":"1690240232194"} 2023-07-24 23:10:32,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:32,199 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:32,357 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c59756cef5ea3b9231917a64964f5e23, NAME => 'hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:32,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:32,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,361 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,362 DEBUG [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info 2023-07-24 23:10:32,362 DEBUG [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info 2023-07-24 23:10:32,363 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c59756cef5ea3b9231917a64964f5e23 columnFamilyName info 2023-07-24 23:10:32,372 DEBUG [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] regionserver.HStore(539): loaded hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/info/e98ee0d65ea04bc8aba8a39261ff2c07 2023-07-24 23:10:32,372 INFO [StoreOpener-c59756cef5ea3b9231917a64964f5e23-1] regionserver.HStore(310): Store=c59756cef5ea3b9231917a64964f5e23/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:32,373 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,375 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,379 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:32,380 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c59756cef5ea3b9231917a64964f5e23; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11954388800, jitterRate=0.11333921551704407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:32,380 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c59756cef5ea3b9231917a64964f5e23: 2023-07-24 23:10:32,381 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23., pid=79, masterSystemTime=1690240232353 2023-07-24 23:10:32,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,383 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:32,383 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 
2023-07-24 23:10:32,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 386ba32f0c3b0408cdca5a4ed5ced8e4, NAME => 'hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:32,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:32,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. service=MultiRowMutationService 2023-07-24 23:10:32,384 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=c59756cef5ea3b9231917a64964f5e23, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:32,384 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 23:10:32,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:32,384 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240232384"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240232384"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240232384"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240232384"}]},"ts":"1690240232384"} 2023-07-24 23:10:32,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-24 23:10:32,389 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure c59756cef5ea3b9231917a64964f5e23, server=jenkins-hbase4.apache.org,46215,1690240224735 in 191 msec 2023-07-24 23:10:32,390 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c59756cef5ea3b9231917a64964f5e23, REOPEN/MOVE in 478 msec 2023-07-24 23:10:32,390 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family m of region 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,392 DEBUG [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m 2023-07-24 23:10:32,392 DEBUG [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m 2023-07-24 23:10:32,392 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 386ba32f0c3b0408cdca5a4ed5ced8e4 columnFamilyName m 2023-07-24 23:10:32,400 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,400 DEBUG [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] regionserver.HStore(539): loaded hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m/8c880ec8273e4a18b62a3ba6f70e67b7 2023-07-24 23:10:32,400 INFO [StoreOpener-386ba32f0c3b0408cdca5a4ed5ced8e4-1] regionserver.HStore(310): Store=386ba32f0c3b0408cdca5a4ed5ced8e4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:32,401 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,402 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,406 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:32,407 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 386ba32f0c3b0408cdca5a4ed5ced8e4; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3277631b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:32,407 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 386ba32f0c3b0408cdca5a4ed5ced8e4: 2023-07-24 23:10:32,408 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4., pid=80, masterSystemTime=1690240232353 2023-07-24 23:10:32,410 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:32,410 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:32,411 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=386ba32f0c3b0408cdca5a4ed5ced8e4, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:32,411 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240232411"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240232411"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240232411"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240232411"}]},"ts":"1690240232411"} 2023-07-24 23:10:32,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-24 23:10:32,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 386ba32f0c3b0408cdca5a4ed5ced8e4, server=jenkins-hbase4.apache.org,46215,1690240224735 in 214 msec 2023-07-24 23:10:32,416 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=386ba32f0c3b0408cdca5a4ed5ced8e4, REOPEN/MOVE in 506 msec 2023-07-24 23:10:32,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-24 23:10:32,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580, jenkins-hbase4.apache.org,42429,1690240220974] are moved back to default 2023-07-24 23:10:32,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-24 23:10:32,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:32,915 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42429] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:54536 deadline: 1690240292914, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=26. 2023-07-24 23:10:33,019 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33649] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48086 deadline: 1690240293019, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=14. 
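The "Move servers done: default => bar" and GetRSGroupInfo entries above are produced by the rsgroup admin endpoint. A minimal sketch of the client-side calls that drive them, assuming the hbase-rsgroup RSGroupAdminClient shipped with branch-2.4 and an illustrative host/port (the real test uses the minicluster's own region server addresses):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");                    // create the target group
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33649)); // illustrative
          rsGroupAdmin.moveServers(servers, "bar");          // default => bar, as logged above
          RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar"); // GetRSGroupInfo call seen above
          System.out.println("bar now hosts " + bar.getServers().size() + " server(s)");
        }
      }
    }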
2023-07-24 23:10:33,121 DEBUG [hconnection-0x63581179-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:33,124 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:33,133 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:33,134 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:33,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 23:10:33,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:33,138 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:33,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:33,141 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:33,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-24 23:10:33,142 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42429] ipc.CallRunner(144): callId: 186 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:54528 deadline: 1690240293142, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=26. 
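The create request logged above ('Group_testFailRemoveGroup' with a single family 'f', one version, no bloom filter, 64 KB blocks, one region replica) maps onto the standard Admin API. A minimal sketch that sets only the attributes visible in the log and leaves everything else at defaults; connection details are assumed:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
              .setRegionReplication(1)                                 // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)                                   // VERSIONS => '1'
                  .setBloomFilterType(BloomType.NONE)                  // BLOOMFILTER => 'NONE'
                  .setBlocksize(65536)                                 // BLOCKSIZE => '65536'
                  .build())
              .build());
        }
      }
    }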
2023-07-24 23:10:33,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 23:10:33,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 23:10:33,249 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:33,249 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:33,250 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:33,251 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:33,254 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:33,255 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,256 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 empty. 2023-07-24 23:10:33,257 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,257 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 23:10:33,286 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:33,287 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4181d58172330aaeec03c5497896cc1, NAME => 'Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing e4181d58172330aaeec03c5497896cc1, disabling compactions & flushes 2023-07-24 23:10:33,305 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. after waiting 0 ms 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,305 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,305 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:33,307 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:33,309 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240233308"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240233308"}]},"ts":"1690240233308"} 2023-07-24 23:10:33,311 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 23:10:33,312 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:33,312 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240233312"}]},"ts":"1690240233312"} 2023-07-24 23:10:33,313 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-24 23:10:33,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, ASSIGN}] 2023-07-24 23:10:33,323 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, ASSIGN 2023-07-24 23:10:33,324 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:33,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 23:10:33,476 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:33,476 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240233476"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240233476"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240233476"}]},"ts":"1690240233476"} 2023-07-24 23:10:33,478 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:33,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:33,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4181d58172330aaeec03c5497896cc1, NAME => 'Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:33,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:33,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,639 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,640 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:33,640 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:33,641 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4181d58172330aaeec03c5497896cc1 columnFamilyName f 2023-07-24 23:10:33,642 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(310): Store=e4181d58172330aaeec03c5497896cc1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:33,643 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,643 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:33,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4181d58172330aaeec03c5497896cc1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11399393120, jitterRate=0.061651214957237244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:33,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:33,653 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1., pid=83, masterSystemTime=1690240233631 2023-07-24 23:10:33,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
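The "SteppingSplitPolicy" banner in the region-open message above is the default split policy in HBase 2.x, while hbase:rsgroup earlier reported DisabledRegionSplitPolicy. A minimal, non-authoritative sketch of the two usual ways a policy is selected (cluster-wide config key versus per-table descriptor); the table name is reused from the log purely for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class SplitPolicySketch {
      public static void main(String[] args) {
        // Cluster-wide default, read by the region server when a region is opened.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");

        // Per-table override, as system tables do with DisabledRegionSplitPolicy.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
        System.out.println(td.getRegionSplitPolicyClassName());
      }
    }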
2023-07-24 23:10:33,656 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:33,656 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240233656"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240233656"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240233656"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240233656"}]},"ts":"1690240233656"} 2023-07-24 23:10:33,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-24 23:10:33,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735 in 180 msec 2023-07-24 23:10:33,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-24 23:10:33,663 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, ASSIGN in 339 msec 2023-07-24 23:10:33,664 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:33,664 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240233664"}]},"ts":"1690240233664"} 2023-07-24 23:10:33,666 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-24 23:10:33,668 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:33,670 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 530 msec 2023-07-24 23:10:33,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 23:10:33,748 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-24 23:10:33,748 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-24 23:10:33,748 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:33,755 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
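The "Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms" entries above come from the test utility's assignment wait. A minimal sketch of the equivalent call in test code, assuming TEST_UTIL is the HBaseTestingUtility instance that started this minicluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // TEST_UTIL is assumed to be the utility backing the running minicluster.
      static void waitForTable(HBaseTestingUtility TEST_UTIL) throws Exception {
        TableName tableName = TableName.valueOf("Group_testFailRemoveGroup");
        // Blocks until every region of the table is assigned and visible in hbase:meta,
        // or fails after the 60 s timeout seen in the log.
        TEST_UTIL.waitUntilAllRegionsAssigned(tableName, 60000);
      }
    }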
2023-07-24 23:10:33,755 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:33,755 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-24 23:10:33,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-24 23:10:33,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:33,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:33,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:33,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:33,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-24 23:10:33,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region e4181d58172330aaeec03c5497896cc1 to RSGroup bar 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 23:10:33,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:33,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE 2023-07-24 23:10:33,764 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-24 23:10:33,765 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE 2023-07-24 23:10:33,766 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:33,766 DEBUG [PEWorker-2] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240233766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240233766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240233766"}]},"ts":"1690240233766"} 2023-07-24 23:10:33,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:33,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4181d58172330aaeec03c5497896cc1, disabling compactions & flushes 2023-07-24 23:10:33,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. after waiting 0 ms 2023-07-24 23:10:33,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:33,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:33,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:33,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:33,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e4181d58172330aaeec03c5497896cc1 move to jenkins-hbase4.apache.org,36981,1690240220580 record at close sequenceid=2 2023-07-24 23:10:33,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:33,931 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSED 2023-07-24 23:10:33,932 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240233931"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240233931"}]},"ts":"1690240233931"} 2023-07-24 23:10:33,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 23:10:33,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735 in 165 msec 2023-07-24 23:10:33,936 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:34,087 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 23:10:34,087 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:34,087 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240234087"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240234087"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240234087"}]},"ts":"1690240234087"} 2023-07-24 23:10:34,089 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:34,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:34,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4181d58172330aaeec03c5497896cc1, NAME => 'Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:34,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:34,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,248 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,249 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:34,249 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:34,250 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4181d58172330aaeec03c5497896cc1 columnFamilyName f 2023-07-24 23:10:34,251 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(310): Store=e4181d58172330aaeec03c5497896cc1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:34,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,253 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4181d58172330aaeec03c5497896cc1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10812246080, jitterRate=0.006968885660171509}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:34,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:34,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1., pid=86, masterSystemTime=1690240234241 2023-07-24 23:10:34,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:34,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:34,262 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:34,262 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240234261"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240234261"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240234261"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240234261"}]},"ts":"1690240234261"} 2023-07-24 23:10:34,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-24 23:10:34,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,36981,1690240220580 in 175 msec 2023-07-24 23:10:34,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE in 504 msec 2023-07-24 23:10:34,275 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 23:10:34,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-24 23:10:34,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
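The MoveTables request ("move tables [Group_testFailRemoveGroup] to rsgroup bar") is what kicked off the REOPEN/MOVE procedure chain traced above (pids 84-86: close on the old server, reopen on a server in the target group). A minimal sketch of the client-side call, again assuming the branch-2.4 RSGroupAdminClient and an existing connection:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master closes each region of the table on its current server and
          // reopens it on a server belonging to the target group, as the log shows.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }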
2023-07-24 23:10:34,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:34,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:34,770 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:34,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 23:10:34,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:34,773 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 23:10:34,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:34,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:34864 deadline: 1690241434773, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-24 23:10:34,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429] to rsgroup default 2023-07-24 23:10:34,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:34,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:34864 deadline: 1690241434775, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-24 23:10:34,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-24 23:10:34,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:34,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:34,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:34,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:34,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-24 23:10:34,782 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region e4181d58172330aaeec03c5497896cc1 to RSGroup default 2023-07-24 23:10:34,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE 2023-07-24 23:10:34,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 23:10:34,784 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE 2023-07-24 23:10:34,785 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:34,785 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240234785"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240234785"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240234785"}]},"ts":"1690240234785"} 2023-07-24 23:10:34,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:34,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4181d58172330aaeec03c5497896cc1, disabling compactions & flushes 2023-07-24 23:10:34,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:34,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:34,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. after waiting 0 ms 2023-07-24 23:10:34,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:34,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:34,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:34,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:34,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e4181d58172330aaeec03c5497896cc1 move to jenkins-hbase4.apache.org,46215,1690240224735 record at close sequenceid=5 2023-07-24 23:10:34,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:34,953 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSED 2023-07-24 23:10:34,953 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240234953"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240234953"}]},"ts":"1690240234953"} 2023-07-24 23:10:34,957 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-24 23:10:34,957 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,36981,1690240220580 in 168 msec 2023-07-24 23:10:34,958 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:35,108 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:35,109 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240235108"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240235108"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240235108"}]},"ts":"1690240235108"} 2023-07-24 23:10:35,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:35,269 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:35,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4181d58172330aaeec03c5497896cc1, NAME => 'Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:35,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:35,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,272 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,273 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:35,273 DEBUG [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f 2023-07-24 23:10:35,273 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4181d58172330aaeec03c5497896cc1 columnFamilyName f 2023-07-24 23:10:35,274 INFO [StoreOpener-e4181d58172330aaeec03c5497896cc1-1] regionserver.HStore(310): Store=e4181d58172330aaeec03c5497896cc1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:35,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,276 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:35,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4181d58172330aaeec03c5497896cc1; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12041151360, jitterRate=0.12141960859298706}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:35,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:35,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1., pid=89, masterSystemTime=1690240235265 2023-07-24 23:10:35,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:35,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:35,283 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:35,283 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240235283"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240235283"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240235283"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240235283"}]},"ts":"1690240235283"} 2023-07-24 23:10:35,286 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-24 23:10:35,286 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735 in 173 msec 2023-07-24 23:10:35,287 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, REOPEN/MOVE in 503 msec 2023-07-24 23:10:35,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-24 23:10:35,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
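The REOPEN/MOVE procedure above (pid=87) is the master acting on an RSGroupAdminService.MoveTables request. A minimal client-side sketch of that request, assuming the hbase-rsgroup RSGroupAdminClient that appears in the stack traces later in this log (its moveTables(Set<TableName>, String) method); the class name is illustrative only:

// Minimal sketch of the MoveTables call behind the REOPEN/MOVE procedure above.
// Assumes the hbase-rsgroup RSGroupAdminClient seen in the stack traces in this
// log; table and group names are the ones used by this test.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesToDefaultSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Moving the table to another group makes the master reopen each of its
      // regions on a server of the target group (TransitRegionStateProcedure,
      // REOPEN/MOVE, as logged above for region e4181d58172330aaeec03c5497896cc1).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
          "default");
    }
  }
}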
2023-07-24 23:10:35,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:35,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:35,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:35,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 23:10:35,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:35,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:34864 deadline: 1690241435792, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
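The ConstraintException above is the rejection path that testFailRemoveGroup is designed to hit: a group cannot be removed while it still holds servers. Continuing the same sketch (class and method names illustrative), the failing call would look roughly like this:

// Continuation of the sketch above (same rsGroupAdmin). Removing a group that
// still has member servers is rejected with the ConstraintException logged here.
import java.io.IOException;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RemoveGroupSketch {
  static void tryRemoveBar(RSGroupAdminClient rsGroupAdmin) throws IOException {
    try {
      rsGroupAdmin.removeRSGroup("bar");   // rejected while 'bar' still has 3 servers
    } catch (ConstraintException expected) {
      // Expected: "RSGroup bar has 3 servers; you must remove these servers
      // from the RSGroup before the RSGroup can be removed."
    }
  }
}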
2023-07-24 23:10:35,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429] to rsgroup default 2023-07-24 23:10:35,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:35,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 23:10:35,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:35,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:35,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-24 23:10:35,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580, jenkins-hbase4.apache.org,42429,1690240220974] are moved back to bar 2023-07-24 23:10:35,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-24 23:10:35,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:35,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:35,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:35,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 23:10:35,816 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42429] ipc.CallRunner(144): callId: 211 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:54528 deadline: 1690240295816, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46215 startCode=1690240224735. As of locationSeqNum=6. 
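Emptying group 'bar' by moving its three servers back to 'default' is what clears that constraint, so the second remove attempt succeeds. A sketch of the corresponding MoveServers call, assuming the same RSGroupAdminClient plus org.apache.hadoop.hbase.net.Address, with the host:port pairs taken from the log (names illustrative):

// Sketch of the MoveServers call that empties group 'bar' so it can be removed.
// Host:port pairs are the ones listed in the log above.
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveServersSketch {
  static void emptyBar(RSGroupAdminClient rsGroupAdmin) throws IOException {
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33649));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36981));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 42429));
    // After this call 'bar' has no servers left, and the next removeRSGroup("bar")
    // in the log goes through.
    rsGroupAdmin.moveServers(servers, "default");
  }
}

The RegionMovedException at the end of the block above is a separate, routine event: a scan was still using the old location of e4181d58172330aaeec03c5497896cc1 after the earlier REOPEN/MOVE, and the client refreshes its cached location and retries.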
2023-07-24 23:10:35,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:35,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:35,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:35,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:35,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:35,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:35,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:35,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:35,946 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-24 23:10:35,946 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-24 23:10:35,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:35,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 23:10:35,951 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240235951"}]},"ts":"1690240235951"} 2023-07-24 23:10:35,953 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-24 23:10:35,956 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-24 23:10:35,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, UNASSIGN}] 2023-07-24 23:10:35,960 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, UNASSIGN 2023-07-24 23:10:35,962 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:35,962 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240235962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240235962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240235962"}]},"ts":"1690240235962"} 2023-07-24 23:10:35,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:36,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 23:10:36,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:36,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4181d58172330aaeec03c5497896cc1, disabling compactions & flushes 2023-07-24 23:10:36,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:36,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:36,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. after waiting 0 ms 2023-07-24 23:10:36,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 2023-07-24 23:10:36,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 23:10:36,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1. 
2023-07-24 23:10:36,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4181d58172330aaeec03c5497896cc1: 2023-07-24 23:10:36,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:36,128 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=e4181d58172330aaeec03c5497896cc1, regionState=CLOSED 2023-07-24 23:10:36,128 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690240236128"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240236128"}]},"ts":"1690240236128"} 2023-07-24 23:10:36,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 23:10:36,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure e4181d58172330aaeec03c5497896cc1, server=jenkins-hbase4.apache.org,46215,1690240224735 in 165 msec 2023-07-24 23:10:36,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-24 23:10:36,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=e4181d58172330aaeec03c5497896cc1, UNASSIGN in 175 msec 2023-07-24 23:10:36,133 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240236133"}]},"ts":"1690240236133"} 2023-07-24 23:10:36,134 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-24 23:10:36,136 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-24 23:10:36,137 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-24 23:10:36,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 23:10:36,254 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-24 23:10:36,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-24 23:10:36,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,259 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-24 23:10:36,261 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:36,266 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:36,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 23:10:36,268 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits] 2023-07-24 23:10:36,274 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/10.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1/recovered.edits/10.seqid 2023-07-24 23:10:36,275 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testFailRemoveGroup/e4181d58172330aaeec03c5497896cc1 2023-07-24 23:10:36,275 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 23:10:36,278 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,281 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-24 23:10:36,283 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-24 23:10:36,284 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,284 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
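As the HFileArchiver entries show, DeleteTableProcedure does not discard region files outright; it moves them from the table's data directory into the cluster archive directory. A small sketch of inspecting that archive location with the Hadoop FileSystem API (the path is copied from the log; in a real deployment it derives from hbase.rootdir, and the class name is illustrative):

// Sketch: listing the archive location that HFileArchiver wrote to above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class ArchiveCheckSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path archived = new Path(
        "hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/"
            + "archive/data/default/Group_testFailRemoveGroup");
    FileSystem fs = archived.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(archived)) {
      System.out.println(status.getPath()); // e.g. the archived recovered.edits/10.seqid
    }
  }
}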
2023-07-24 23:10:36,284 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240236284"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:36,286 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 23:10:36,286 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e4181d58172330aaeec03c5497896cc1, NAME => 'Group_testFailRemoveGroup,,1690240233138.e4181d58172330aaeec03c5497896cc1.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 23:10:36,286 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-24 23:10:36,286 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240236286"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:36,288 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-24 23:10:36,290 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 23:10:36,292 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 34 msec 2023-07-24 23:10:36,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 23:10:36,369 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-24 23:10:36,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,373 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,374 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:36,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
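The DISABLE (procId 90) and DELETE (procId 93) operations reported as completed above correspond to plain Admin calls on the client side; a compact sketch using the standard HBase Admin API (class name illustrative, table name from this test):

// Sketch of the client side of the DISABLE/DELETE operations completed above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.disableTable(table); // DisableTableProcedure: unassign regions, mark DISABLED
      admin.deleteTable(table);  // DeleteTableProcedure: archive region files, clean hbase:meta
    }
  }
}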
2023-07-24 23:10:36,374 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:36,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:36,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:36,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:36,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:36,391 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:36,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:36,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:36,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,407 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:36,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241436406, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:36,407 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:36,409 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:36,410 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,414 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:36,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:36,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:36,440 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=511 (was 507) Potentially hanging thread: hconnection-0x63581179-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1519090482_17 at /127.0.0.1:49240 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x68365814-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:43794 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1519090482_17 at /127.0.0.1:42288 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x63581179-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x45c34053-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 805) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=396 (was 413), ProcessCount=177 (was 177), AvailableMemoryMB=6038 (was 6223) 2023-07-24 23:10:36,441 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 23:10:36,463 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=396, ProcessCount=177, AvailableMemoryMB=6036 2023-07-24 23:10:36,463 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 23:10:36,464 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-24 23:10:36,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,470 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:36,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:36,470 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:36,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:36,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:36,472 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:36,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:36,479 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:36,483 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:36,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,487 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:36,489 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:36,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,493 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:36,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:36,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241436495, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:36,496 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-24 23:10:36,502 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:36,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,503 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:36,504 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:36,504 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:36,505 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:36,505 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:36,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_102387135 2023-07-24 23:10:36,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:36,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:36,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:36,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,520 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649] to rsgroup Group_testMultiTableMove_102387135 2023-07-24 23:10:36,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:36,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:36,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:36,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185] are moved back to default 2023-07-24 23:10:36,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_102387135 2023-07-24 23:10:36,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:36,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:36,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:36,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_102387135 2023-07-24 23:10:36,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:36,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:36,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:36,541 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:36,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-24 23:10:36,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 23:10:36,544 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:36,545 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:36,545 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:36,546 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:36,551 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:36,555 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,556 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 empty. 2023-07-24 23:10:36,557 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,557 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 23:10:36,576 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:36,577 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 812558f4905722312bd5dd9f296ef5e3, NAME => 'GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:36,607 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:36,607 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
812558f4905722312bd5dd9f296ef5e3, disabling compactions & flushes 2023-07-24 23:10:36,607 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,607 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,607 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. after waiting 0 ms 2023-07-24 23:10:36,607 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,607 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,608 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 812558f4905722312bd5dd9f296ef5e3: 2023-07-24 23:10:36,611 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:36,612 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240236612"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240236612"}]},"ts":"1690240236612"} 2023-07-24 23:10:36,615 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 23:10:36,616 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:36,616 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240236616"}]},"ts":"1690240236616"} 2023-07-24 23:10:36,618 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-24 23:10:36,623 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:36,623 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:36,623 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:36,623 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:36,623 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:36,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, ASSIGN}] 2023-07-24 23:10:36,626 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, ASSIGN 2023-07-24 23:10:36,628 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36981,1690240220580; forceNewPlan=false, retain=false 2023-07-24 23:10:36,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 23:10:36,778 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 23:10:36,780 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:36,780 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240236780"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240236780"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240236780"}]},"ts":"1690240236780"} 2023-07-24 23:10:36,782 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:36,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 23:10:36,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 812558f4905722312bd5dd9f296ef5e3, NAME => 'GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:36,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:36,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,940 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,942 DEBUG [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/f 2023-07-24 23:10:36,942 DEBUG [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/f 2023-07-24 23:10:36,942 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 812558f4905722312bd5dd9f296ef5e3 columnFamilyName f 2023-07-24 23:10:36,943 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] regionserver.HStore(310): Store=812558f4905722312bd5dd9f296ef5e3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:36,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:36,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:36,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 812558f4905722312bd5dd9f296ef5e3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11751394240, jitterRate=0.09443387389183044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:36,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 812558f4905722312bd5dd9f296ef5e3: 2023-07-24 23:10:36,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3., pid=96, masterSystemTime=1690240236933 2023-07-24 23:10:36,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:36,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 
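The sequence just logged (CreateTableProcedure pid=94, the meta updates, ASSIGN, and the region open on jenkins-hbase4.apache.org,36981) is the master-side handling of an ordinary single-family create request. A rough client-side sketch of the call that triggers it, using the stock HBase 2.x Admin API; the connection bootstrap below is illustrative and is not taken from the test source:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestTable {
  public static void main(String[] args) throws Exception {
    // Assumes an hbase-site.xml for a reachable cluster is on the classpath.
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'f'; minor attributes are left at the builder defaults and
      // may differ slightly from the exact descriptor echoed in the create log line above.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // The master persists and runs a CreateTableProcedure for this request,
      // which is what the pid=94 state transitions above correspond to.
      admin.createTable(desc);
    }
  }
}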
2023-07-24 23:10:36,954 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:36,954 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240236954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240236954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240236954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240236954"}]},"ts":"1690240236954"} 2023-07-24 23:10:36,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-24 23:10:36,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,36981,1690240220580 in 174 msec 2023-07-24 23:10:36,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-24 23:10:36,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, ASSIGN in 335 msec 2023-07-24 23:10:36,960 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:36,961 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240236960"}]},"ts":"1690240236960"} 2023-07-24 23:10:36,962 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-24 23:10:36,964 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:36,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 427 msec 2023-07-24 23:10:37,044 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 23:10:37,046 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 23:10:37,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 23:10:37,148 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-24 23:10:37,148 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. 
Timeout = 60000ms 2023-07-24 23:10:37,148 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:37,154 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-24 23:10:37,155 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:37,155 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-24 23:10:37,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:37,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:37,170 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:37,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-24 23:10:37,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 23:10:37,173 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:37,173 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:37,174 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:37,174 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:37,178 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:37,181 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,182 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 empty. 
2023-07-24 23:10:37,182 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,182 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 23:10:37,208 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:37,210 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 46b7b0fc25cd923ee68b903e8b4da211, NAME => 'GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 46b7b0fc25cd923ee68b903e8b4da211, disabling compactions & flushes 2023-07-24 23:10:37,243 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. after waiting 0 ms 2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,243 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 
2023-07-24 23:10:37,243 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 46b7b0fc25cd923ee68b903e8b4da211: 2023-07-24 23:10:37,246 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:37,247 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240237246"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240237246"}]},"ts":"1690240237246"} 2023-07-24 23:10:37,255 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:37,256 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:37,256 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240237256"}]},"ts":"1690240237256"} 2023-07-24 23:10:37,258 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-24 23:10:37,262 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:37,262 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:37,262 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:37,262 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:37,262 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:37,262 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, ASSIGN}] 2023-07-24 23:10:37,264 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, ASSIGN 2023-07-24 23:10:37,265 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:37,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 23:10:37,416 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 23:10:37,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:37,418 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240237417"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240237417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240237417"}]},"ts":"1690240237417"} 2023-07-24 23:10:37,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:37,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 23:10:37,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46b7b0fc25cd923ee68b903e8b4da211, NAME => 'GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:37,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:37,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,577 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,579 DEBUG [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/f 2023-07-24 23:10:37,579 DEBUG [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/f 2023-07-24 23:10:37,580 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46b7b0fc25cd923ee68b903e8b4da211 columnFamilyName f 2023-07-24 23:10:37,580 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] regionserver.HStore(310): Store=46b7b0fc25cd923ee68b903e8b4da211/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:37,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:37,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 46b7b0fc25cd923ee68b903e8b4da211; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9424819840, jitterRate=-0.12224525213241577}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:37,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 46b7b0fc25cd923ee68b903e8b4da211: 2023-07-24 23:10:37,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211., pid=99, masterSystemTime=1690240237572 2023-07-24 23:10:37,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 
2023-07-24 23:10:37,591 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:37,591 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240237591"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240237591"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240237591"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240237591"}]},"ts":"1690240237591"} 2023-07-24 23:10:37,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-24 23:10:37,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,46215,1690240224735 in 178 msec 2023-07-24 23:10:37,603 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-24 23:10:37,603 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, ASSIGN in 338 msec 2023-07-24 23:10:37,603 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:37,604 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240237603"}]},"ts":"1690240237603"} 2023-07-24 23:10:37,605 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-24 23:10:37,607 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:37,609 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 447 msec 2023-07-24 23:10:37,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 23:10:37,776 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-24 23:10:37,776 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-24 23:10:37,776 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:37,785 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
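Both tables now exist and are assigned; the entries that follow (GetRSGroupInfoOfTable, "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_102387135", and the REOPEN/MOVE procedures) are the server side of the rsgroup client calls already visible in the stack trace earlier (RSGroupAdminClient.moveServers and friends). A hedged sketch of that client sequence as it might look in test code; the group name, host, and port are simply the values that appear in this log, and conn is assumed to be an already-open Connection to the cluster:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesToGroupSketch {
  // Illustrative helper, not the test's actual code.
  static void moveTablesToGroup(Connection conn) throws Exception {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);

    // AddRSGroup: create the target group.
    groupAdmin.addRSGroup("Group_testMultiTableMove_102387135");

    // MoveServers: move one region server into the group.
    groupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33649)),
        "Group_testMultiTableMove_102387135");

    // MoveTables: reassigning both tables is what triggers the REOPEN/MOVE
    // TransitRegionStateProcedures logged below.
    Set<TableName> tables = new HashSet<>();
    tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
    tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
    groupAdmin.moveTables(tables, "Group_testMultiTableMove_102387135");

    // GetRSGroupInfo: confirm both tables ended up in the group.
    RSGroupInfo info = groupAdmin.getRSGroupInfo("Group_testMultiTableMove_102387135");
    System.out.println(info.getTables());
  }
}

In the test itself the client is wrapped in VerifyingRSGroupAdminClient (per the trace above), which re-checks the persisted group state after each call rather than issuing the raw calls shown here.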
2023-07-24 23:10:37,786 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:37,786 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-24 23:10:37,787 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:37,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 23:10:37,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:37,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 23:10:37,812 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:37,813 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_102387135 2023-07-24 23:10:37,816 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_102387135 2023-07-24 23:10:37,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:37,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:37,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:37,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:37,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_102387135 2023-07-24 23:10:37,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 46b7b0fc25cd923ee68b903e8b4da211 to RSGroup Group_testMultiTableMove_102387135 2023-07-24 23:10:37,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, REOPEN/MOVE 2023-07-24 23:10:37,826 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_102387135 2023-07-24 23:10:37,826 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 812558f4905722312bd5dd9f296ef5e3 to RSGroup Group_testMultiTableMove_102387135 2023-07-24 23:10:37,828 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, REOPEN/MOVE 2023-07-24 23:10:37,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, REOPEN/MOVE 2023-07-24 23:10:37,828 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:37,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_102387135, current retry=0 2023-07-24 23:10:37,830 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, REOPEN/MOVE 2023-07-24 23:10:37,830 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240237828"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240237828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240237828"}]},"ts":"1690240237828"} 2023-07-24 23:10:37,835 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:37,835 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:37,835 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240237835"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240237835"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240237835"}]},"ts":"1690240237835"} 2023-07-24 23:10:37,846 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:37,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:37,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 46b7b0fc25cd923ee68b903e8b4da211, disabling compactions & flushes 2023-07-24 23:10:37,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:37,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. after waiting 0 ms 2023-07-24 23:10:37,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:38,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 812558f4905722312bd5dd9f296ef5e3, disabling compactions & flushes 2023-07-24 23:10:38,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:38,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:38,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. after waiting 0 ms 2023-07-24 23:10:38,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:38,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:38,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:38,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:38,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 46b7b0fc25cd923ee68b903e8b4da211: 2023-07-24 23:10:38,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 46b7b0fc25cd923ee68b903e8b4da211 move to jenkins-hbase4.apache.org,33649,1690240221185 record at close sequenceid=2 2023-07-24 23:10:38,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 
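The "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_102387135" request and the REOPEN/MOVE transitions being traced here are what a single RSGroupAdminClient.moveTables() call produces. A hedged sketch of that client-side call follows; connection setup, the helper name, and the printed checks are assumptions rather than the test's code.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTablesSketch {
  static void moveBothTables(Connection conn, String targetGroup) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
    TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
    // The GetRSGroupInfoOfTable requests logged just before the move
    RSGroupInfo before = rsGroupAdmin.getRSGroupInfoOfTable(tableA);
    System.out.println("group before move: " + before.getName());
    // One call relocates every region of both tables onto the target group's
    // servers, producing the REOPEN/MOVE TransitRegionStateProcedures above
    Set<TableName> tables = new HashSet<>(Arrays.asList(tableA, tableB));
    rsGroupAdmin.moveTables(tables, targetGroup);
    // Both tables should now report the target group
    System.out.println("group after move: "
        + rsGroupAdmin.getRSGroupInfoOfTable(tableB).getName());
  }
}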
2023-07-24 23:10:38,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 812558f4905722312bd5dd9f296ef5e3: 2023-07-24 23:10:38,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 812558f4905722312bd5dd9f296ef5e3 move to jenkins-hbase4.apache.org,33649,1690240221185 record at close sequenceid=2 2023-07-24 23:10:38,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,013 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=CLOSED 2023-07-24 23:10:38,013 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240238013"}]},"ts":"1690240238013"} 2023-07-24 23:10:38,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,015 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=CLOSED 2023-07-24 23:10:38,015 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238014"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240238014"}]},"ts":"1690240238014"} 2023-07-24 23:10:38,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-24 23:10:38,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,46215,1690240224735 in 181 msec 2023-07-24 23:10:38,022 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:38,022 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-24 23:10:38,022 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,36981,1690240220580 in 172 msec 2023-07-24 23:10:38,024 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33649,1690240221185; forceNewPlan=false, retain=false 2023-07-24 23:10:38,172 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 
23:10:38,173 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238172"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240238172"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240238172"}]},"ts":"1690240238172"} 2023-07-24 23:10:38,173 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:38,173 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240238173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240238173"}]},"ts":"1690240238173"} 2023-07-24 23:10:38,177 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:38,178 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:38,467 INFO [AsyncFSWAL-0-hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData-prefix:jenkins-hbase4.apache.org,42959,1690240218606] wal.AbstractFSWAL(1141): Slow sync cost: 288 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46677,DS-99f991c9-beb0-41c1-9404-df7150cba31b,DISK], DatanodeInfoWithStorage[127.0.0.1:39741,DS-8317c52f-8ef5-4f17-a0c2-afb6962c43fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46461,DS-ea853878-8ff0-4830-8e4e-e0b850d87b95,DISK]] 2023-07-24 23:10:38,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 
2023-07-24 23:10:38,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 812558f4905722312bd5dd9f296ef5e3, NAME => 'GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:38,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:38,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,625 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,626 DEBUG [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/f 2023-07-24 23:10:38,626 DEBUG [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/f 2023-07-24 23:10:38,627 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 812558f4905722312bd5dd9f296ef5e3 columnFamilyName f 2023-07-24 23:10:38,627 INFO [StoreOpener-812558f4905722312bd5dd9f296ef5e3-1] regionserver.HStore(310): Store=812558f4905722312bd5dd9f296ef5e3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:38,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:38,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 812558f4905722312bd5dd9f296ef5e3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10598205760, jitterRate=-0.012965172529220581}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:38,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 812558f4905722312bd5dd9f296ef5e3: 2023-07-24 23:10:38,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3., pid=104, masterSystemTime=1690240238618 2023-07-24 23:10:38,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:38,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:38,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 
2023-07-24 23:10:38,637 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46b7b0fc25cd923ee68b903e8b4da211, NAME => 'GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:38,637 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238637"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240238637"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240238637"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240238637"}]},"ts":"1690240238637"} 2023-07-24 23:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,642 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-24 23:10:38,642 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,33649,1690240221185 in 463 msec 2023-07-24 23:10:38,644 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,645 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, REOPEN/MOVE in 814 msec 2023-07-24 23:10:38,647 DEBUG [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/f 2023-07-24 23:10:38,647 DEBUG [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/f 2023-07-24 23:10:38,648 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46b7b0fc25cd923ee68b903e8b4da211 columnFamilyName f 2023-07-24 23:10:38,649 INFO [StoreOpener-46b7b0fc25cd923ee68b903e8b4da211-1] regionserver.HStore(310): Store=46b7b0fc25cd923ee68b903e8b4da211/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:38,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:38,655 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 46b7b0fc25cd923ee68b903e8b4da211; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9552202080, jitterRate=-0.11038185656070709}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:38,655 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 46b7b0fc25cd923ee68b903e8b4da211: 2023-07-24 23:10:38,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211., pid=105, masterSystemTime=1690240238618 2023-07-24 23:10:38,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:38,658 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 
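Once both regions reopen on jenkins-hbase4.apache.org,33649,1690240221185, a caller can confirm the placement by comparing each region's location against the target group's server list. A minimal sketch of such a check; the helper name and structure are assumptions, not part of the test.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class GroupPlacementCheckSketch {
  // True only if every region of the table is hosted by a server in the group
  static boolean hostedByGroup(Connection conn, TableName table, String group) throws Exception {
    RSGroupInfo info = new RSGroupAdminClient(conn).getRSGroupInfo(group);
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        Address hostPort = Address.fromParts(loc.getHostname(), loc.getPort());
        if (!info.getServers().contains(hostPort)) {
          return false; // at least one region is still hosted outside the group
        }
      }
    }
    return true;
  }
}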
2023-07-24 23:10:38,658 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:38,659 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238658"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240238658"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240238658"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240238658"}]},"ts":"1690240238658"} 2023-07-24 23:10:38,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-24 23:10:38,663 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,33649,1690240221185 in 483 msec 2023-07-24 23:10:38,664 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, REOPEN/MOVE in 838 msec 2023-07-24 23:10:38,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-24 23:10:38,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_102387135. 2023-07-24 23:10:38,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:38,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:38,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:38,838 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 23:10:38,838 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:38,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 23:10:38,839 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:38,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:38,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_102387135 2023-07-24 23:10:38,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:38,843 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-24 23:10:38,843 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-24 23:10:38,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:38,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 23:10:38,847 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240238847"}]},"ts":"1690240238847"} 2023-07-24 23:10:38,849 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-24 23:10:38,850 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-24 23:10:38,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, UNASSIGN}] 2023-07-24 23:10:38,855 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, UNASSIGN 2023-07-24 23:10:38,856 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:38,856 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240238856"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240238856"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240238856"}]},"ts":"1690240238856"} 2023-07-24 23:10:38,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 812558f4905722312bd5dd9f296ef5e3, 
server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:38,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 23:10:39,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:39,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 812558f4905722312bd5dd9f296ef5e3, disabling compactions & flushes 2023-07-24 23:10:39,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:39,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:39,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. after waiting 0 ms 2023-07-24 23:10:39,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 2023-07-24 23:10:39,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:39,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3. 
2023-07-24 23:10:39,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 812558f4905722312bd5dd9f296ef5e3: 2023-07-24 23:10:39,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:39,018 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=812558f4905722312bd5dd9f296ef5e3, regionState=CLOSED 2023-07-24 23:10:39,019 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240239018"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240239018"}]},"ts":"1690240239018"} 2023-07-24 23:10:39,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-24 23:10:39,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 812558f4905722312bd5dd9f296ef5e3, server=jenkins-hbase4.apache.org,33649,1690240221185 in 162 msec 2023-07-24 23:10:39,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-24 23:10:39,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=812558f4905722312bd5dd9f296ef5e3, UNASSIGN in 171 msec 2023-07-24 23:10:39,025 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240239025"}]},"ts":"1690240239025"} 2023-07-24 23:10:39,027 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-24 23:10:39,029 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-24 23:10:39,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 186 msec 2023-07-24 23:10:39,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 23:10:39,150 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-24 23:10:39,150 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-24 23:10:39,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,153 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_102387135' 2023-07-24 23:10:39,154 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:39,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,158 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:39,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 23:10:39,160 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits] 2023-07-24 23:10:39,165 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3/recovered.edits/7.seqid 2023-07-24 23:10:39,165 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveA/812558f4905722312bd5dd9f296ef5e3 2023-07-24 23:10:39,165 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 23:10:39,167 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,169 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-24 23:10:39,171 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-24 23:10:39,172 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,172 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
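The DisableTableProcedure and DeleteTableProcedure records above (and the matching ones for GrouptestMultiTableMoveB further down) boil down to the usual two-step Admin sequence, sketched here for reference; the helper is assumed, not the test's code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableSketch {
  // Mirrors the DISABLE -> DELETE sequence in the procedure records above
  static void dropTable(Admin admin, TableName table) throws Exception {
    if (admin.isTableEnabled(table)) {
      admin.disableTable(table); // DisableTableProcedure: regions UNASSIGNed, state=DISABLED in hbase:meta
    }
    admin.deleteTable(table);    // DeleteTableProcedure: region dirs archived, meta rows and descriptor removed
  }
}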
2023-07-24 23:10:39,172 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240239172"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:39,173 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 23:10:39,174 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 812558f4905722312bd5dd9f296ef5e3, NAME => 'GrouptestMultiTableMoveA,,1690240236537.812558f4905722312bd5dd9f296ef5e3.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 23:10:39,174 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-24 23:10:39,174 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240239174"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:39,175 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-24 23:10:39,178 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 23:10:39,179 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 28 msec 2023-07-24 23:10:39,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 23:10:39,261 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-24 23:10:39,262 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-24 23:10:39,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-24 23:10:39,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 23:10:39,266 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240239266"}]},"ts":"1690240239266"} 2023-07-24 23:10:39,267 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-24 23:10:39,269 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-24 23:10:39,269 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, UNASSIGN}] 2023-07-24 23:10:39,271 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, UNASSIGN 2023-07-24 23:10:39,271 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:39,271 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240239271"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240239271"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240239271"}]},"ts":"1690240239271"} 2023-07-24 23:10:39,272 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,33649,1690240221185}] 2023-07-24 23:10:39,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 23:10:39,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:39,425 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 46b7b0fc25cd923ee68b903e8b4da211, disabling compactions & flushes 2023-07-24 23:10:39,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:39,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:39,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. after waiting 0 ms 2023-07-24 23:10:39,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 2023-07-24 23:10:39,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:39,430 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211. 
2023-07-24 23:10:39,430 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 46b7b0fc25cd923ee68b903e8b4da211: 2023-07-24 23:10:39,431 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 23:10:39,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:39,432 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=46b7b0fc25cd923ee68b903e8b4da211, regionState=CLOSED 2023-07-24 23:10:39,432 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690240239432"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240239432"}]},"ts":"1690240239432"} 2023-07-24 23:10:39,436 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 23:10:39,436 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 46b7b0fc25cd923ee68b903e8b4da211, server=jenkins-hbase4.apache.org,33649,1690240221185 in 162 msec 2023-07-24 23:10:39,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-24 23:10:39,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=46b7b0fc25cd923ee68b903e8b4da211, UNASSIGN in 167 msec 2023-07-24 23:10:39,439 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240239439"}]},"ts":"1690240239439"} 2023-07-24 23:10:39,440 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-24 23:10:39,442 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-24 23:10:39,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 181 msec 2023-07-24 23:10:39,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 23:10:39,569 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-24 23:10:39,571 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-24 23:10:39,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,576 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_102387135' 2023-07-24 23:10:39,577 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:39,583 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:39,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,585 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits] 2023-07-24 23:10:39,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 23:10:39,599 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits/7.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211/recovered.edits/7.seqid 2023-07-24 23:10:39,600 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/GrouptestMultiTableMoveB/46b7b0fc25cd923ee68b903e8b4da211 2023-07-24 23:10:39,600 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 23:10:39,603 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,605 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-24 23:10:39,617 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 
2023-07-24 23:10:39,618 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,618 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-24 23:10:39,618 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240239618"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:39,620 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 23:10:39,620 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 46b7b0fc25cd923ee68b903e8b4da211, NAME => 'GrouptestMultiTableMoveB,,1690240237160.46b7b0fc25cd923ee68b903e8b4da211.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 23:10:39,620 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-24 23:10:39,620 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240239620"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:39,622 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-24 23:10:39,624 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 23:10:39,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 53 msec 2023-07-24 23:10:39,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 23:10:39,696 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-24 23:10:39,699 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
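The DISABLE and DELETE operations above (procIds 110 and 113) correspond to ordinary Admin API calls issued by the test client. A minimal sketch of that client-side sequence, assuming a standard HBase connection and reusing the table name from this run; it is an illustration, not an excerpt from the test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("GrouptestMultiTableMoveB");
      if (admin.tableExists(table)) {
        // A table must be disabled before it can be deleted; each call blocks
        // until the corresponding master procedure (DisableTableProcedure /
        // DeleteTableProcedure, as logged above) completes.
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table);
        }
        admin.deleteTable(table);
      }
    }
  }
}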
2023-07-24 23:10:39,701 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,702 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649] to rsgroup default 2023-07-24 23:10:39,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_102387135 2023-07-24 23:10:39,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_102387135, current retry=0 2023-07-24 23:10:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185] are moved back to Group_testMultiTableMove_102387135 2023-07-24 23:10:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_102387135 => default 2023-07-24 23:10:39,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_102387135 2023-07-24 23:10:39,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:39,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
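The MoveServers and RemoveRSGroup requests above are the per-test teardown restoring everything to the default group. A rough sketch of the equivalent client-side calls, assuming the test-facing RSGroupAdminClient; the host, port, and group name are placeholders copied from this run:

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RestoreDefaultGroupExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testMultiTableMove_102387135";
      // Move the group's server(s) back to the default group ...
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33649)),
          "default");
      // ... then drop the now-empty group, mirroring the teardown in the log.
      rsGroupAdmin.removeRSGroup(group);
    }
  }
}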
2023-07-24 23:10:39,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:39,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:39,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:39,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,730 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:39,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:39,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:39,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:39,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:39,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241439746, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:39,747 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:39,749 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:39,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,750 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:39,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:39,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,780 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510 (was 511), OpenFileDescriptor=801 (was 809), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=372 (was 396), ProcessCount=177 (was 177), AvailableMemoryMB=5871 (was 6036) 2023-07-24 23:10:39,780 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-24 23:10:39,802 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=510, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=372, ProcessCount=177, AvailableMemoryMB=5870 2023-07-24 23:10:39,802 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-24 23:10:39,802 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-24 23:10:39,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,806 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:39,808 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:39,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,809 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:39,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:39,815 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,818 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:39,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:39,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:39,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:39,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,832 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:39,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241439832, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:39,833 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:39,834 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:39,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,835 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:39,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:39,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:39,837 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-24 23:10:39,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,847 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup oldGroup 2023-07-24 23:10:39,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:39,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to default 2023-07-24 23:10:39,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-24 23:10:39,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 23:10:39,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 23:10:39,867 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:39,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,869 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-24 23:10:39,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 23:10:39,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:39,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:39,879 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42429] to rsgroup anotherRSGroup 2023-07-24 23:10:39,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 23:10:39,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:39,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42429,1690240220974] are moved back to default 2023-07-24 23:10:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-24 23:10:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,896 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 23:10:39,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 23:10:39,902 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-24 23:10:39,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:34864 deadline: 1690241439907, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-24 23:10:39,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-24 23:10:39,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:34864 deadline: 1690241439910, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-24 23:10:39,911 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-24 23:10:39,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:34864 deadline: 1690241439911, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-24 23:10:39,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-24 23:10:39,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:34864 deadline: 1690241439912, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-24 23:10:39,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
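The three rejected rename requests above exercise the constraint checks on renameRSGroup: the source group must exist, the target name must be free, and the built-in default group cannot be renamed. A sketch of how a client would hit those checks, assuming the rsgroup admin client exposes renameRSGroup as the endpoint in this branch does; the client handle and group names are placeholders mirroring this run:

import java.io.IOException;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupConstraintsExample {
  static void tryRename(RSGroupAdminClient rsGroupAdmin, String from, String to) {
    try {
      rsGroupAdmin.renameRSGroup(from, to);
    } catch (ConstraintException e) {
      // Expected messages, per the log: "RSGroup nonExistingRSGroup does not exist",
      // "Group already exists: anotherRSGroup", "Can't rename default rsgroup".
      System.out.println("rename rejected: " + e.getMessage());
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  static void demo(RSGroupAdminClient rsGroupAdmin) {
    tryRename(rsGroupAdmin, "nonExistingRSGroup", "newRSGroup1"); // source group missing
    tryRename(rsGroupAdmin, "oldGroup", "anotherRSGroup");        // target name already taken
    tryRename(rsGroupAdmin, "default", "newRSGroup2");            // default group cannot be renamed
  }
}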
2023-07-24 23:10:39,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,920 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42429] to rsgroup default 2023-07-24 23:10:39,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 23:10:39,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-24 23:10:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42429,1690240220974] are moved back to anotherRSGroup 2023-07-24 23:10:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-24 23:10:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-24 23:10:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 23:10:39,936 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-24 23:10:39,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup default 2023-07-24 23:10:39,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 23:10:39,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:39,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-24 23:10:39,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to oldGroup 2023-07-24 23:10:39,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-24 23:10:39,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-24 23:10:39,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:39,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:39,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
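The teardown sequence above (servers moved back to "default", then the emptied group removed, with the /hbase/rsgroup znodes rewritten each time) corresponds to the client calls sketched below. The sketch assumes the RSGroupAdminClient visible in the stack traces later in this log; the helper method name is invented, and the host/port and group names are copied from the log entries.

import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hypothetical teardown helper: empty a test group back into "default", then drop it.
void moveBackAndRemove(RSGroupAdminClient rsGroupAdmin) throws Exception {
  // "Move servers done: anotherRSGroup => default"
  rsGroupAdmin.moveServers(
      Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42429)),
      "default");
  // Removes the group and its /hbase/rsgroup/anotherRSGroup znode.
  rsGroupAdmin.removeRSGroup("anotherRSGroup");
}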
2023-07-24 23:10:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:39,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:39,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:39,955 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:39,958 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:39,958 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:39,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:39,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:39,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:39,962 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:39,965 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,965 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:39,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:39,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241439967, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:39,967 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:39,969 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:39,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:39,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:39,970 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:39,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:39,970 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:39,987 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=514 (was 510) Potentially hanging thread: hconnection-0x63581179-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=372 (was 372), ProcessCount=177 (was 177), AvailableMemoryMB=5873 (was 5870) - AvailableMemoryMB LEAK? - 2023-07-24 23:10:39,987 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 23:10:40,003 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=514, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=372, ProcessCount=177, AvailableMemoryMB=5873 2023-07-24 23:10:40,003 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 23:10:40,003 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-24 23:10:40,007 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:40,007 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:40,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:40,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
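The WARN "Got this on setup, FYI" above shows that TestRSGroupsBase deliberately tolerates the ConstraintException raised when it tries to move the master's address (port 42959, which is not an online region server) into the "master" group. A hedged sketch of that catch-and-continue pattern follows; the helper name and logger parameter are invented for illustration and are not the verbatim test code.

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hypothetical helper mirroring the tolerated failure logged above.
void tryPinMaster(RSGroupAdminClient rsGroupAdmin, org.slf4j.Logger log) throws Exception {
  try {
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42959)),
        "master");
  } catch (ConstraintException e) {
    // Expected here: 42959 is the master RPC port, not a live region server.
    log.warn("Got this on setup, FYI", e);
  }
}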
2023-07-24 23:10:40,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:40,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:40,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:40,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:40,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:40,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:40,018 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:40,019 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:40,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:40,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:40,024 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:40,027 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:40,027 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:40,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:40,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:40,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241440030, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:40,030 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:40,032 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:40,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:40,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:40,033 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:40,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:40,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:40,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:40,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:40,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-24 23:10:40,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:40,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:40,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:40,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:40,068 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:40,068 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:40,071 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup oldgroup 2023-07-24 23:10:40,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:40,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:40,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:40,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:40,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to default 2023-07-24 23:10:40,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-24 23:10:40,078 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:40,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:40,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:40,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 23:10:40,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:40,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:40,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-24 23:10:40,090 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:40,090 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-24 23:10:40,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 23:10:40,093 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:40,094 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,094 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:40,095 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:40,098 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:40,100 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,100 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 empty. 
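The HMaster line above records the descriptor of the 'testRename' table being created (single column family 'tr', REGION_REPLICATION => '1', VERSIONS => '1', BLOCKSIZE => '65536'). A minimal client-side equivalent using the standard HBase 2.x Admin/TableDescriptorBuilder API is sketched below; it is an approximation of what the test issues, not the verbatim test code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: create 'testRename' with family 'tr', matching the attributes in the log.
void createTestRename(Admin admin) throws Exception {
  admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
      .setRegionReplication(1)                                   // REGION_REPLICATION => '1'
      .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
          .setMaxVersions(1)                                     // VERSIONS => '1'
          .setBlocksize(65536)                                   // BLOCKSIZE => '65536'
          .build())
      .build());
}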
2023-07-24 23:10:40,101 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,101 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-24 23:10:40,127 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:40,131 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => e5172a504c1b9d74aaf33c65006a1502, NAME => 'testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing e5172a504c1b9d74aaf33c65006a1502, disabling compactions & flushes 2023-07-24 23:10:40,174 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. after waiting 0 ms 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,174 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,174 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:40,177 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:40,178 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240240178"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240240178"}]},"ts":"1690240240178"} 2023-07-24 23:10:40,180 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 23:10:40,182 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:40,182 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240240182"}]},"ts":"1690240240182"} 2023-07-24 23:10:40,183 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-24 23:10:40,189 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:40,189 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:40,189 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:40,189 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:40,189 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, ASSIGN}] 2023-07-24 23:10:40,191 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, ASSIGN 2023-07-24 23:10:40,193 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:40,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 23:10:40,343 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
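At this point the assignment plan places the single region of 'testRename' on jenkins-hbase4.apache.org,42429,1690240220974. If one wanted to confirm the resulting placement from the client side, the standard RegionLocator API (not part of this test) could be used, roughly as follows.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

// Sketch: print where each region of 'testRename' is currently hosted.
void printPlacement(Connection conn) throws Exception {
  try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
    for (HRegionLocation loc : locator.getAllRegionLocations()) {
      System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
    }
  }
}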
2023-07-24 23:10:40,344 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:40,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240240344"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240240344"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240240344"}]},"ts":"1690240240344"} 2023-07-24 23:10:40,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:40,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 23:10:40,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5172a504c1b9d74aaf33c65006a1502, NAME => 'testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:40,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:40,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,505 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,506 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:40,506 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:40,507 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5172a504c1b9d74aaf33c65006a1502 columnFamilyName tr 2023-07-24 23:10:40,507 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(310): Store=e5172a504c1b9d74aaf33c65006a1502/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:40,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:40,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e5172a504c1b9d74aaf33c65006a1502; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9794950400, jitterRate=-0.08777415752410889}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:40,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:40,521 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502., pid=116, masterSystemTime=1690240240499 2023-07-24 23:10:40,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 
2023-07-24 23:10:40,523 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:40,523 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240240523"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240240523"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240240523"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240240523"}]},"ts":"1690240240523"} 2023-07-24 23:10:40,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-24 23:10:40,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974 in 179 msec 2023-07-24 23:10:40,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-24 23:10:40,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, ASSIGN in 337 msec 2023-07-24 23:10:40,529 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:40,529 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240240529"}]},"ts":"1690240240529"} 2023-07-24 23:10:40,530 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-24 23:10:40,532 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:40,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 446 msec 2023-07-24 23:10:40,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 23:10:40,696 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-24 23:10:40,696 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-24 23:10:40,696 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:40,700 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-24 23:10:40,700 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:40,700 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
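[Annotation] The entries above trace CreateTableProcedure pid=114 for table testRename (single column family tr) through region assignment until the test utility reports all regions assigned. For reference, a minimal client-side sketch of the equivalent create-and-verify step against an HBase 2.4 cluster is shown below; the table and family names are taken from the log, while the connection handling and the availability check are illustrative assumptions, not the test's actual code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("testRename");
      // Single column family 'tr', matching the store opened in the log.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
      // Admin.createTable blocks until the CreateTableProcedure completes
      // ("Operation: CREATE ... procId: 114 completed" in the log); the test
      // additionally waits until every region of the table is assigned.
      if (!admin.isTableAvailable(table)) {
        throw new IllegalStateException("testRename not available after create");
      }
    }
  }
}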
2023-07-24 23:10:40,702 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-24 23:10:40,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:40,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:40,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:40,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:40,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-24 23:10:40,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region e5172a504c1b9d74aaf33c65006a1502 to RSGroup oldgroup 2023-07-24 23:10:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:40,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE 2023-07-24 23:10:40,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-24 23:10:40,708 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE 2023-07-24 23:10:40,709 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:40,709 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240240709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240240709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240240709"}]},"ts":"1690240240709"} 2023-07-24 23:10:40,710 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:40,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e5172a504c1b9d74aaf33c65006a1502, disabling compactions & flushes 2023-07-24 23:10:40,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. after waiting 0 ms 2023-07-24 23:10:40,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:40,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:40,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:40,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e5172a504c1b9d74aaf33c65006a1502 move to jenkins-hbase4.apache.org,36981,1690240220580 record at close sequenceid=2 2023-07-24 23:10:40,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:40,871 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=CLOSED 2023-07-24 23:10:40,871 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240240871"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240240871"}]},"ts":"1690240240871"} 2023-07-24 23:10:40,873 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-24 23:10:40,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974 in 162 msec 2023-07-24 23:10:40,874 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36981,1690240220580; 
forceNewPlan=false, retain=false 2023-07-24 23:10:41,024 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 23:10:41,025 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:41,025 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240241025"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240241025"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240241025"}]},"ts":"1690240241025"} 2023-07-24 23:10:41,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:41,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:41,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5172a504c1b9d74aaf33c65006a1502, NAME => 'testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:41,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:41,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,184 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,185 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:41,185 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:41,186 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5172a504c1b9d74aaf33c65006a1502 columnFamilyName tr 2023-07-24 23:10:41,186 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(310): Store=e5172a504c1b9d74aaf33c65006a1502/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:41,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:41,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e5172a504c1b9d74aaf33c65006a1502; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11894847840, jitterRate=0.10779403150081635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:41,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:41,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502., pid=119, masterSystemTime=1690240241178 2023-07-24 23:10:41,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:41,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 
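[Annotation] The move logged by RSGroupAdminEndpoint ("move tables [testRename] to rsgroup oldgroup") and the resulting REOPEN/MOVE of region e5172a504c1b9d74aaf33c65006a1502 onto jenkins-hbase4.apache.org,36981 correspond to a single RSGroupAdminService.MoveTables request. A hedged sketch of that call with the hbase-rsgroup client API (the module this test exercises) follows; group and table names come from the log, the connection setup and the follow-up check are illustrative.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTestRenameToOldGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("testRename");
      // Issues RSGroupAdminService.MoveTables; the master then reopens every
      // region of the table on a server that belongs to the target group,
      // which is the CLOSE/OPEN sequence visible in the surrounding entries.
      rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
      // Confirm where the table ended up, as the test does afterwards.
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("testRename now in group: " + group.getName());
    }
  }
}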
2023-07-24 23:10:41,196 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:41,196 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240241196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240241196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240241196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240241196"}]},"ts":"1690240241196"} 2023-07-24 23:10:41,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-24 23:10:41,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,36981,1690240220580 in 171 msec 2023-07-24 23:10:41,201 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE in 492 msec 2023-07-24 23:10:41,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-24 23:10:41,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-24 23:10:41,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:41,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:41,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:41,715 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:41,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 23:10:41,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:41,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 23:10:41,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:41,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 23:10:41,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:41,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:41,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:41,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-24 23:10:41,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:41,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:41,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:41,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:41,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:41,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:41,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:41,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:41,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42429] to rsgroup normal 2023-07-24 23:10:41,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:41,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:41,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:41,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42429,1690240220974] are moved back to default 2023-07-24 23:10:41,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-24 23:10:41,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:41,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:41,749 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:41,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 23:10:41,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:41,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:41,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-24 23:10:41,755 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:41,755 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-24 23:10:41,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 23:10:41,757 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:41,758 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:41,758 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:41,758 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 23:10:41,758 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:41,760 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:41,762 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:41,762 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 empty. 2023-07-24 23:10:41,763 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:41,763 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-24 23:10:41,779 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:41,781 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 07163ab4ec4541d8899adbf059caab34, NAME => 'unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 07163ab4ec4541d8899adbf059caab34, disabling compactions & flushes 2023-07-24 23:10:41,791 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. after waiting 0 ms 2023-07-24 23:10:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:41,791 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 
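[Annotation] Before unmovedTable is created, the log shows the client adding a new group ("add rsgroup normal") and moving one region server into it ("move servers [jenkins-hbase4.apache.org:42429] to rsgroup normal", followed by "Move servers done: default => normal"). A minimal sketch of those two calls with the same client API is given below; the host and port are copied from the log, everything else is an illustrative assumption.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class SetUpNormalGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the empty group first (RSGroupAdminService.AddRSGroup).
      rsGroupAdmin.addRSGroup("normal");
      // Then move one region server out of 'default' into the new group
      // (RSGroupAdminService.MoveServers); any regions still on that server
      // that belong to other groups are moved off it first.
      Address server = Address.fromParts("jenkins-hbase4.apache.org", 42429);
      rsGroupAdmin.moveServers(Collections.singleton(server), "normal");
    }
  }
}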
2023-07-24 23:10:41,792 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:41,794 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:41,795 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240241795"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240241795"}]},"ts":"1690240241795"} 2023-07-24 23:10:41,796 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:41,797 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:41,797 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240241797"}]},"ts":"1690240241797"} 2023-07-24 23:10:41,798 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-24 23:10:41,802 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, ASSIGN}] 2023-07-24 23:10:41,803 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, ASSIGN 2023-07-24 23:10:41,804 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:41,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 23:10:41,955 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:41,956 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240241955"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240241955"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240241955"}]},"ts":"1690240241955"} 2023-07-24 23:10:41,958 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:42,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-24 23:10:42,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 07163ab4ec4541d8899adbf059caab34, NAME => 'unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:42,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:42,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,118 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,120 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:42,121 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:42,121 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 07163ab4ec4541d8899adbf059caab34 columnFamilyName ut 2023-07-24 23:10:42,122 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(310): Store=07163ab4ec4541d8899adbf059caab34/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:42,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:42,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 07163ab4ec4541d8899adbf059caab34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11288776000, jitterRate=0.05134919285774231}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:42,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:42,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34., pid=122, masterSystemTime=1690240242111 2023-07-24 23:10:42,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 
2023-07-24 23:10:42,135 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:42,135 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240242135"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240242135"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240242135"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240242135"}]},"ts":"1690240242135"} 2023-07-24 23:10:42,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-24 23:10:42,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735 in 183 msec 2023-07-24 23:10:42,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 23:10:42,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, ASSIGN in 343 msec 2023-07-24 23:10:42,148 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:42,148 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240242148"}]},"ts":"1690240242148"} 2023-07-24 23:10:42,149 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-24 23:10:42,153 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:42,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 400 msec 2023-07-24 23:10:42,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 23:10:42,360 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-24 23:10:42,360 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-24 23:10:42,360 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:42,364 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-24 23:10:42,365 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:42,365 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-24 23:10:42,367 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-24 23:10:42,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 23:10:42,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:42,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:42,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:42,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:42,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-24 23:10:42,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 07163ab4ec4541d8899adbf059caab34 to RSGroup normal 2023-07-24 23:10:42,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE 2023-07-24 23:10:42,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-24 23:10:42,377 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE 2023-07-24 23:10:42,377 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:42,377 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240242377"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240242377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240242377"}]},"ts":"1690240242377"} 2023-07-24 23:10:42,379 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:42,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 07163ab4ec4541d8899adbf059caab34, disabling compactions & flushes 2023-07-24 23:10:42,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 
2023-07-24 23:10:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. after waiting 0 ms 2023-07-24 23:10:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:42,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:42,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 07163ab4ec4541d8899adbf059caab34 move to jenkins-hbase4.apache.org,42429,1690240220974 record at close sequenceid=2 2023-07-24 23:10:42,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,544 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=CLOSED 2023-07-24 23:10:42,544 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240242543"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240242543"}]},"ts":"1690240242543"} 2023-07-24 23:10:42,547 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-24 23:10:42,547 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735 in 166 msec 2023-07-24 23:10:42,548 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:42,699 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:42,699 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240242699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240242699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240242699"}]},"ts":"1690240242699"} 2023-07-24 23:10:42,700 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:42,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 07163ab4ec4541d8899adbf059caab34, NAME => 'unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:42,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:42,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,858 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,859 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:42,860 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:42,860 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
07163ab4ec4541d8899adbf059caab34 columnFamilyName ut 2023-07-24 23:10:42,860 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(310): Store=07163ab4ec4541d8899adbf059caab34/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:42,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:42,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 07163ab4ec4541d8899adbf059caab34; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9570284000, jitterRate=-0.10869784653186798}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:42,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:42,869 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34., pid=125, masterSystemTime=1690240242852 2023-07-24 23:10:42,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:42,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 
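For context, the entries above (CloseRegionProcedure pid=124 on jenkins-hbase4.apache.org,46215 followed by OpenRegionProcedure pid=125 on jenkins-hbase4.apache.org,42429, both children of the REOPEN/MOVE procedure pid=123) are the master-side trace of unmovedTable being pulled onto servers of the "normal" RSGroup. A minimal client-side sketch of the call that drives such a move, assuming the pre-3.0 RSGroupAdminClient API shipped by this hbase-rsgroup module (table and group names are copied from the log; this is illustrative, not the test's own code):

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdmin groupAdmin = new RSGroupAdminClient(conn);
      // Ask the master to place every region of 'unmovedTable' on servers of the
      // 'normal' group; the master then runs one REOPEN/MOVE TransitRegionStateProcedure
      // per region, which is the pid=123/124/125 chain traced in the log.
      groupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
    }
  }
}

The RPC only returns once ProcedureSyncWait sees the parent procedure finish, which is why the handler thread later logs "waitFor pid=123" before reporting that all regions reached the target group.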
2023-07-24 23:10:42,871 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:42,871 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240242871"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240242871"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240242871"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240242871"}]},"ts":"1690240242871"} 2023-07-24 23:10:42,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-24 23:10:42,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,42429,1690240220974 in 173 msec 2023-07-24 23:10:42,876 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE in 500 msec 2023-07-24 23:10:43,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-24 23:10:43,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-24 23:10:43,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:43,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:43,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:43,383 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 23:10:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 23:10:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:43,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 23:10:43,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:43,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-24 23:10:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:43,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:43,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-24 23:10:43,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-24 23:10:43,395 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:43,395 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:43,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-24 23:10:43,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:43,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 23:10:43,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:43,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 23:10:43,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:43,403 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:43,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-24 23:10:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:43,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:43,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:43,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:43,416 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-24 23:10:43,416 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region 07163ab4ec4541d8899adbf059caab34 to RSGroup default 2023-07-24 23:10:43,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE 2023-07-24 23:10:43,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 23:10:43,417 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE 2023-07-24 23:10:43,418 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:43,418 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240243418"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240243418"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240243418"}]},"ts":"1690240243418"} 2023-07-24 23:10:43,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:43,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 07163ab4ec4541d8899adbf059caab34, disabling compactions & flushes 2023-07-24 23:10:43,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. after waiting 0 ms 2023-07-24 23:10:43,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:43,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:43,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 07163ab4ec4541d8899adbf059caab34 move to jenkins-hbase4.apache.org,46215,1690240224735 record at close sequenceid=5 2023-07-24 23:10:43,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,580 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=CLOSED 2023-07-24 23:10:43,581 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240243580"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240243580"}]},"ts":"1690240243580"} 2023-07-24 23:10:43,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-24 23:10:43,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,42429,1690240220974 in 163 msec 2023-07-24 23:10:43,584 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:43,735 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:43,735 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240243735"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240243735"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240243735"}]},"ts":"1690240243735"} 2023-07-24 23:10:43,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:43,892 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 07163ab4ec4541d8899adbf059caab34, NAME => 'unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:43,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:43,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,894 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,895 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:43,895 DEBUG [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/ut 2023-07-24 23:10:43,895 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 07163ab4ec4541d8899adbf059caab34 columnFamilyName ut 2023-07-24 23:10:43,896 INFO [StoreOpener-07163ab4ec4541d8899adbf059caab34-1] regionserver.HStore(310): Store=07163ab4ec4541d8899adbf059caab34/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:43,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:43,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 07163ab4ec4541d8899adbf059caab34; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10063018400, jitterRate=-0.06280837953090668}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:43,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:43,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34., pid=128, masterSystemTime=1690240243888 2023-07-24 23:10:43,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:43,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 
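The RenameRSGroup call logged at 23:10:43,385-392 (oldgroup renamed to newgroup, with the group znodes rewritten under /hbase/rsgroup) is the operation under test in testRenameRSGroup; the GetRSGroupInfo/GetRSGroupInfoOfTable calls that follow are its verification, and the REOPEN/MOVE above (pid=126, unmovedTable reopened on jenkins-hbase4.apache.org,46215) is already the per-test teardown returning that table to the default group. A hedged sketch of the rename-and-verify step, assuming this branch's RSGroupAdmin interface exposes renameRSGroup as the RenameRSGroup service request indicates (group and table names from the log; not the test's actual code):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenameGroupSketch {
  // 'groupAdmin' would be an RSGroupAdminClient, as in the earlier sketch.
  static void renameAndVerify(RSGroupAdmin groupAdmin) throws Exception {
    groupAdmin.renameRSGroup("oldgroup", "newgroup");

    // Tables that belonged to 'oldgroup' must now resolve to 'newgroup' ...
    RSGroupInfo ofTable = groupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
    // ... and the old name should no longer resolve to a group.
    RSGroupInfo oldGroup = groupAdmin.getRSGroupInfo("oldgroup");
    System.out.println("testRename now in: " + ofTable.getName()
        + ", oldgroup still present: " + (oldGroup != null));
  }
}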
2023-07-24 23:10:43,904 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=07163ab4ec4541d8899adbf059caab34, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:43,904 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690240243904"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240243904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240243904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240243904"}]},"ts":"1690240243904"} 2023-07-24 23:10:43,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-24 23:10:43,907 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 07163ab4ec4541d8899adbf059caab34, server=jenkins-hbase4.apache.org,46215,1690240224735 in 168 msec 2023-07-24 23:10:43,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=07163ab4ec4541d8899adbf059caab34, REOPEN/MOVE in 491 msec 2023-07-24 23:10:44,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-24 23:10:44,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-24 23:10:44,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:44,419 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42429] to rsgroup default 2023-07-24 23:10:44,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 23:10:44,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:44,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:44,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:44,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-24 23:10:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42429,1690240220974] are moved back to normal 2023-07-24 23:10:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-24 23:10:44,424 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:44,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-24 23:10:44,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:44,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:44,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:44,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 23:10:44,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:44,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:44,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:44,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:44,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:44,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:44,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:44,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:44,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:44,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:44,438 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:44,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-24 23:10:44,441 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:44,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:44,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:44,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-24 23:10:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(345): Moving region e5172a504c1b9d74aaf33c65006a1502 to RSGroup default 2023-07-24 23:10:44,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE 2023-07-24 23:10:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 23:10:44,444 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE 2023-07-24 23:10:44,445 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:44,445 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240244445"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240244445"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240244445"}]},"ts":"1690240244445"} 2023-07-24 23:10:44,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,36981,1690240220580}] 2023-07-24 23:10:44,518 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 23:10:44,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e5172a504c1b9d74aaf33c65006a1502, disabling compactions & flushes 2023-07-24 23:10:44,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 
after waiting 0 ms 2023-07-24 23:10:44,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 23:10:44,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:44,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e5172a504c1b9d74aaf33c65006a1502 move to jenkins-hbase4.apache.org,42429,1690240220974 record at close sequenceid=5 2023-07-24 23:10:44,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,609 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=CLOSED 2023-07-24 23:10:44,609 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240244609"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240244609"}]},"ts":"1690240244609"} 2023-07-24 23:10:44,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-24 23:10:44,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,36981,1690240220580 in 164 msec 2023-07-24 23:10:44,615 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:44,766 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 23:10:44,766 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:44,766 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240244766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240244766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240244766"}]},"ts":"1690240244766"} 2023-07-24 23:10:44,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:44,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e5172a504c1b9d74aaf33c65006a1502, NAME => 'testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,925 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,926 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:44,926 DEBUG [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/tr 2023-07-24 23:10:44,927 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e5172a504c1b9d74aaf33c65006a1502 columnFamilyName tr 2023-07-24 23:10:44,928 INFO [StoreOpener-e5172a504c1b9d74aaf33c65006a1502-1] regionserver.HStore(310): Store=e5172a504c1b9d74aaf33c65006a1502/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:44,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:44,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e5172a504c1b9d74aaf33c65006a1502; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9781676480, jitterRate=-0.08901038765907288}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:44,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:44,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502., pid=131, masterSystemTime=1690240244919 2023-07-24 23:10:44,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:44,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 
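Above, the testRename region e5172a504c1b9d74aaf33c65006a1502 finishes reopening on jenkins-hbase4.apache.org,42429 as the teardown empties the renamed group: tables go back to the default group, then the servers, then the group itself is removed (followed below by re-creating the bookkeeping "master" group, whose moveServers attempt fails with a ConstraintException that the test tolerates, since the master's address jenkins-hbase4.apache.org:42959 is not a managed region server). A compact sketch of that restore sequence, with the server addresses copied from the log and again assuming the RSGroupAdmin client API rather than quoting the test's code:

import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

public class GroupTeardownSketch {
  // Mirrors the tearDownAfterMethod flow visible in the log: move the group's
  // tables and servers back to 'default', then drop the now-empty group.
  static void restoreDefaults(RSGroupAdmin groupAdmin) throws Exception {
    groupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "default");
    groupAdmin.moveServers(new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 33649),
        Address.fromParts("jenkins-hbase4.apache.org", 36981))), "default");
    groupAdmin.removeRSGroup("newgroup");
  }
}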
2023-07-24 23:10:44,938 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=e5172a504c1b9d74aaf33c65006a1502, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:44,938 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690240244938"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240244938"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240244938"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240244938"}]},"ts":"1690240244938"} 2023-07-24 23:10:44,941 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-24 23:10:44,941 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure e5172a504c1b9d74aaf33c65006a1502, server=jenkins-hbase4.apache.org,42429,1690240220974 in 172 msec 2023-07-24 23:10:44,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=e5172a504c1b9d74aaf33c65006a1502, REOPEN/MOVE in 497 msec 2023-07-24 23:10:45,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-24 23:10:45,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-24 23:10:45,445 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:45,446 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup default 2023-07-24 23:10:45,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 23:10:45,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-24 23:10:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to newgroup 2023-07-24 23:10:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-24 23:10:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:45,451 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-24 23:10:45,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:45,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:45,458 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:45,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:45,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:45,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:45,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,469 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,471 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:45,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241445471, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:45,472 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:45,474 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:45,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,474 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,475 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:45,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:45,475 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,528 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=509 (was 514), OpenFileDescriptor=773 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=342 (was 372), ProcessCount=177 (was 177), AvailableMemoryMB=5751 (was 5873) 2023-07-24 23:10:45,528 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-24 23:10:45,557 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=342, ProcessCount=177, AvailableMemoryMB=5750 2023-07-24 23:10:45,558 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-24 23:10:45,558 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-24 23:10:45,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,563 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:45,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:45,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:45,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:45,565 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:45,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:45,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:45,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:45,581 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:45,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:45,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:45,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:45,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,596 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:45,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241445596, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:45,597 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:45,598 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:45,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,600 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:45,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:45,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-24 23:10:45,602 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:45,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-24 23:10:45,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-24 23:10:45,611 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-24 23:10:45,611 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,612 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-24 23:10:45,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:34864 deadline: 1690241445612, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-24 23:10:45,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-24 23:10:45,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:34864 deadline: 1690241445614, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 23:10:45,616 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 23:10:45,617 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-24 23:10:45,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-24 23:10:45,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:34864 deadline: 1690241445623, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 23:10:45,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:45,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
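The testBogusArgs entries above show the contract being exercised: the GetRSGroupInfo / GetRSGroupInfoOfTable / GetRSGroupInfoOfServer lookups for unknown names simply return nothing, while RemoveRSGroup, MoveServers and BalanceRSGroup against the nonexistent group "bogus" are rejected server-side with a ConstraintException that CallRunner reports back to the client. A minimal client-side sketch of that behaviour, assuming the RSGroupAdminClient API from this hbase-rsgroup module and a hypothetical already-open Connection (probeBogusArgs and conn are illustrative names, not part of the log):

```java
// Illustrative sketch only (not part of the captured log). Assumes the
// RSGroupAdminClient API from the branch-2.4 hbase-rsgroup module.
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class BogusArgsSketch {
  static void probeBogusArgs(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);

    // Lookups for unknown names come back empty rather than failing,
    // matching the GetRSGroupInfo* requests logged above.
    RSGroupInfo byTable  = admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));
    RSGroupInfo byServer = admin.getRSGroupOfServer(Address.fromParts("bogus", 123));
    RSGroupInfo byGroup  = admin.getRSGroupInfo("bogus");
    assert byTable == null && byServer == null && byGroup == null;

    // Mutating calls against the unknown group are rejected with the
    // ConstraintException recorded by MetricsHBaseServer/CallRunner above.
    try {
      admin.removeRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup bogus does not exist"
    }
    try {
      admin.balanceRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
  }
}
```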
2023-07-24 23:10:45,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:45,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:45,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:45,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:45,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:45,636 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:45,639 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:45,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:45,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:45,648 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:45,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,652 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:45,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241445652, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:45,655 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:45,656 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:45,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,657 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:45,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:45,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,674 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x63581179-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x68365814-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=342 (was 342), ProcessCount=177 (was 177), AvailableMemoryMB=5750 (was 5750) 2023-07-24 23:10:45,674 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 23:10:45,690 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=342, ProcessCount=177, AvailableMemoryMB=5749 2023-07-24 23:10:45,691 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 23:10:45,691 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-24 23:10:45,694 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,695 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,695 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:45,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
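Each before/after block in this log repeats the same per-test cleanup cycle: list the groups, move tables and servers back to default, drop and re-create the master group, then attempt to move the master's own address (port 42959) into it. RSGroupAdminServer rejects that last step with "Server ... is either offline or it does not exist." because the master is not a live region server, and TestRSGroupsBase just records it as "Got this on setup, FYI" and continues. A rough sketch of that cycle, assuming the RSGroupAdminClient API and using hypothetical names (resetGroups, admin, masterAddress), not the literal TestRSGroupsBase code:

```java
// Illustrative sketch only (hypothetical names; not the literal TestRSGroupsBase code).
// Assumes the RSGroupAdminClient API from this hbase-rsgroup module.
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class GroupCleanupSketch {
  static void resetGroups(RSGroupAdminClient admin, Address masterAddress) throws IOException {
    admin.moveTables(Collections.emptySet(), "default");   // "move tables [] to rsgroup default"
    admin.moveServers(Collections.emptySet(), "default");  // "move servers [] to rsgroup default"
    admin.removeRSGroup("master");                         // then re-create the helper group
    admin.addRSGroup("master");
    try {
      // Expected to fail: the master's address is not a known/online region server,
      // so moveServers raises the ConstraintException seen throughout this log.
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // logged by the test as "Got this on setup, FYI"
    }
  }
}
```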
2023-07-24 23:10:45,695 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:45,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:45,696 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:45,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:45,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:45,702 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:45,704 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:45,705 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:45,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:45,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,715 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:45,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:45,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241445715, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:45,716 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:45,717 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:45,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,718 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:45,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:45,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:45,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,723 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:45,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:45,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:45,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 23:10:45,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to default 2023-07-24 23:10:45,736 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:45,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:45,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:45,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,741 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:45,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:45,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:45,746 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:45,746 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-24 23:10:45,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 23:10:45,748 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:45,748 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:45,748 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:45,749 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:45,752 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:45,756 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:45,756 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:45,756 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:45,756 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:45,756 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 empty. 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 empty. 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d empty. 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 empty. 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 empty. 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:45,757 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:45,758 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:45,758 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:45,758 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:45,758 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 23:10:45,780 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:45,782 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => d0eea26678a47a60b1f7e8952a5886d1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:45,782 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => f6c0e544daaf362d155e8db195223970, NAME => 'Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:45,782 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7dadbaacb571422dbc56cfb5b7eed574, NAME => 'Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing d0eea26678a47a60b1f7e8952a5886d1, disabling compactions & flushes 2023-07-24 23:10:45,799 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. after waiting 0 ms 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 
2023-07-24 23:10:45,799 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:45,799 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for d0eea26678a47a60b1f7e8952a5886d1: 2023-07-24 23:10:45,800 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9c3de9391b88a722a7ac6bd0ad977d1d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:45,811 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:45,811 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 7dadbaacb571422dbc56cfb5b7eed574, disabling compactions & flushes 2023-07-24 23:10:45,812 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:45,812 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:45,812 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. after waiting 0 ms 2023-07-24 23:10:45,812 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:45,812 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 
2023-07-24 23:10:45,812 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 7dadbaacb571422dbc56cfb5b7eed574: 2023-07-24 23:10:45,812 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => ae065852d9cf91abfaab0c45904f11e3, NAME => 'Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing f6c0e544daaf362d155e8db195223970, disabling compactions & flushes 2023-07-24 23:10:45,822 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. after waiting 0 ms 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:45,822 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:45,822 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for f6c0e544daaf362d155e8db195223970: 2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 9c3de9391b88a722a7ac6bd0ad977d1d, disabling compactions & flushes 2023-07-24 23:10:45,831 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 
2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. after waiting 0 ms 2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:45,831 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:45,831 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 9c3de9391b88a722a7ac6bd0ad977d1d: 2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing ae065852d9cf91abfaab0c45904f11e3, disabling compactions & flushes 2023-07-24 23:10:45,842 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. after waiting 0 ms 2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:45,842 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 
2023-07-24 23:10:45,842 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for ae065852d9cf91abfaab0c45904f11e3: 2023-07-24 23:10:45,845 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:45,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240245846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240245846"}]},"ts":"1690240245846"} 2023-07-24 23:10:45,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240245846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240245846"}]},"ts":"1690240245846"} 2023-07-24 23:10:45,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240245846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240245846"}]},"ts":"1690240245846"} 2023-07-24 23:10:45,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240245846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240245846"}]},"ts":"1690240245846"} 2023-07-24 23:10:45,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240245846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240245846"}]},"ts":"1690240245846"} 2023-07-24 23:10:45,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 23:10:45,848 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 23:10:45,849 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:45,849 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240245849"}]},"ts":"1690240245849"} 2023-07-24 23:10:45,850 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-24 23:10:45,853 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:45,853 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:45,853 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:45,853 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:45,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, ASSIGN}] 2023-07-24 23:10:45,856 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, ASSIGN 2023-07-24 23:10:45,856 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, ASSIGN 2023-07-24 23:10:45,856 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, ASSIGN 2023-07-24 23:10:45,856 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, ASSIGN 2023-07-24 23:10:45,857 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:45,857 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:45,857 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:45,859 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46215,1690240224735; forceNewPlan=false, retain=false 2023-07-24 23:10:45,859 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, ASSIGN 2023-07-24 23:10:45,860 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42429,1690240220974; forceNewPlan=false, retain=false 2023-07-24 23:10:46,008 INFO [jenkins-hbase4:42959] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 23:10:46,011 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7dadbaacb571422dbc56cfb5b7eed574, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,011 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f6c0e544daaf362d155e8db195223970, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,011 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246011"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246011"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246011"}]},"ts":"1690240246011"} 2023-07-24 23:10:46,011 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=9c3de9391b88a722a7ac6bd0ad977d1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,011 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d0eea26678a47a60b1f7e8952a5886d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,011 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=ae065852d9cf91abfaab0c45904f11e3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,012 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246011"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246011"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246011"}]},"ts":"1690240246011"} 2023-07-24 23:10:46,012 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246011"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246011"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246011"}]},"ts":"1690240246011"} 2023-07-24 23:10:46,011 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246011"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246011"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246011"}]},"ts":"1690240246011"} 2023-07-24 23:10:46,012 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246011"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246011"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246011"}]},"ts":"1690240246011"} 2023-07-24 23:10:46,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure 7dadbaacb571422dbc56cfb5b7eed574, 
server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:46,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure d0eea26678a47a60b1f7e8952a5886d1, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=136, state=RUNNABLE; OpenRegionProcedure 9c3de9391b88a722a7ac6bd0ad977d1d, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:46,016 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure f6c0e544daaf362d155e8db195223970, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure ae065852d9cf91abfaab0c45904f11e3, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 23:10:46,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7dadbaacb571422dbc56cfb5b7eed574, NAME => 'Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 23:10:46,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 
2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae065852d9cf91abfaab0c45904f11e3, NAME => 'Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:46,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,171 INFO [StoreOpener-7dadbaacb571422dbc56cfb5b7eed574-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,171 INFO [StoreOpener-ae065852d9cf91abfaab0c45904f11e3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,172 DEBUG [StoreOpener-7dadbaacb571422dbc56cfb5b7eed574-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/f 2023-07-24 23:10:46,172 DEBUG [StoreOpener-7dadbaacb571422dbc56cfb5b7eed574-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/f 2023-07-24 23:10:46,172 DEBUG [StoreOpener-ae065852d9cf91abfaab0c45904f11e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/f 2023-07-24 23:10:46,173 DEBUG [StoreOpener-ae065852d9cf91abfaab0c45904f11e3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/f 2023-07-24 23:10:46,173 INFO [StoreOpener-7dadbaacb571422dbc56cfb5b7eed574-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7dadbaacb571422dbc56cfb5b7eed574 columnFamilyName f 2023-07-24 23:10:46,173 INFO [StoreOpener-ae065852d9cf91abfaab0c45904f11e3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae065852d9cf91abfaab0c45904f11e3 columnFamilyName f 2023-07-24 23:10:46,173 INFO [StoreOpener-7dadbaacb571422dbc56cfb5b7eed574-1] regionserver.HStore(310): Store=7dadbaacb571422dbc56cfb5b7eed574/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:46,173 INFO [StoreOpener-ae065852d9cf91abfaab0c45904f11e3-1] regionserver.HStore(310): Store=ae065852d9cf91abfaab0c45904f11e3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:46,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,175 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:46,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:46,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae065852d9cf91abfaab0c45904f11e3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11692649760, jitterRate=0.0889628678560257}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:46,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae065852d9cf91abfaab0c45904f11e3: 2023-07-24 23:10:46,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7dadbaacb571422dbc56cfb5b7eed574; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9954824320, jitterRate=-0.07288473844528198}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:46,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7dadbaacb571422dbc56cfb5b7eed574: 2023-07-24 23:10:46,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3., pid=142, masterSystemTime=1690240246166 2023-07-24 23:10:46,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574., pid=138, masterSystemTime=1690240246165 2023-07-24 23:10:46,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 
2023-07-24 23:10:46,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d0eea26678a47a60b1f7e8952a5886d1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 23:10:46,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:46,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,183 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=ae065852d9cf91abfaab0c45904f11e3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,183 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246183"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240246183"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240246183"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240246183"}]},"ts":"1690240246183"} 2023-07-24 23:10:46,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 
2023-07-24 23:10:46,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9c3de9391b88a722a7ac6bd0ad977d1d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 23:10:46,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:46,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,185 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7dadbaacb571422dbc56cfb5b7eed574, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,185 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246185"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240246185"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240246185"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240246185"}]},"ts":"1690240246185"} 2023-07-24 23:10:46,185 INFO [StoreOpener-d0eea26678a47a60b1f7e8952a5886d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,187 INFO [StoreOpener-9c3de9391b88a722a7ac6bd0ad977d1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,187 DEBUG [StoreOpener-d0eea26678a47a60b1f7e8952a5886d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/f 2023-07-24 23:10:46,187 DEBUG [StoreOpener-d0eea26678a47a60b1f7e8952a5886d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/f 2023-07-24 23:10:46,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-24 23:10:46,187 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; 
OpenRegionProcedure ae065852d9cf91abfaab0c45904f11e3, server=jenkins-hbase4.apache.org,42429,1690240220974 in 168 msec 2023-07-24 23:10:46,188 INFO [StoreOpener-d0eea26678a47a60b1f7e8952a5886d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d0eea26678a47a60b1f7e8952a5886d1 columnFamilyName f 2023-07-24 23:10:46,188 DEBUG [StoreOpener-9c3de9391b88a722a7ac6bd0ad977d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/f 2023-07-24 23:10:46,188 DEBUG [StoreOpener-9c3de9391b88a722a7ac6bd0ad977d1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/f 2023-07-24 23:10:46,188 INFO [StoreOpener-d0eea26678a47a60b1f7e8952a5886d1-1] regionserver.HStore(310): Store=d0eea26678a47a60b1f7e8952a5886d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:46,189 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-24 23:10:46,189 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure 7dadbaacb571422dbc56cfb5b7eed574, server=jenkins-hbase4.apache.org,46215,1690240224735 in 173 msec 2023-07-24 23:10:46,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, ASSIGN in 333 msec 2023-07-24 23:10:46,189 INFO [StoreOpener-9c3de9391b88a722a7ac6bd0ad977d1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9c3de9391b88a722a7ac6bd0ad977d1d columnFamilyName f 2023-07-24 23:10:46,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,190 INFO 
[StoreOpener-9c3de9391b88a722a7ac6bd0ad977d1d-1] regionserver.HStore(310): Store=9c3de9391b88a722a7ac6bd0ad977d1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:46,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,190 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, ASSIGN in 335 msec 2023-07-24 23:10:46,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:46,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d0eea26678a47a60b1f7e8952a5886d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10215054080, jitterRate=-0.048648953437805176}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:46,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d0eea26678a47a60b1f7e8952a5886d1: 2023-07-24 23:10:46,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:46,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1., pid=139, masterSystemTime=1690240246166 2023-07-24 23:10:46,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9c3de9391b88a722a7ac6bd0ad977d1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10205016320, jitterRate=-0.0495837926864624}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:46,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9c3de9391b88a722a7ac6bd0ad977d1d: 2023-07-24 23:10:46,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d., pid=140, masterSystemTime=1690240246165 2023-07-24 23:10:46,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:46,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:46,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6c0e544daaf362d155e8db195223970, NAME => 'Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 23:10:46,198 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=d0eea26678a47a60b1f7e8952a5886d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:46,198 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246198"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240246198"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240246198"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240246198"}]},"ts":"1690240246198"} 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 
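The STARTKEY/ENDKEY boundaries reported in the region open events above ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz') indicate that Group_testDisabledTableMove was created pre-split into five regions with a single column family 'f'. A minimal sketch of such a create through the Admin API is shown below; the escaped split keys are decoded here by hand, and the exact helper TestRSGroupsAdmin1 uses is not part of this log.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    final class CreatePreSplitTable {
      // Creates the table with family 'f' and the four split keys visible in the
      // region boundaries above, which yields five regions.
      static void create(Admin admin) throws IOException {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        byte[][] splitKeys = {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},   // i\xBF\x14i\xBE
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},          // r\x1C\xC7r\x1B
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(
            TableDescriptorBuilder.newBuilder(table)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .build(),
            splitKeys);
      }
    }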
2023-07-24 23:10:46,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:46,199 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=9c3de9391b88a722a7ac6bd0ad977d1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,199 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246199"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240246199"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240246199"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240246199"}]},"ts":"1690240246199"} 2023-07-24 23:10:46,200 INFO [StoreOpener-f6c0e544daaf362d155e8db195223970-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,202 DEBUG [StoreOpener-f6c0e544daaf362d155e8db195223970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/f 2023-07-24 23:10:46,202 DEBUG [StoreOpener-f6c0e544daaf362d155e8db195223970-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/f 2023-07-24 23:10:46,202 INFO [StoreOpener-f6c0e544daaf362d155e8db195223970-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6c0e544daaf362d155e8db195223970 columnFamilyName f 2023-07-24 23:10:46,203 INFO [StoreOpener-f6c0e544daaf362d155e8db195223970-1] regionserver.HStore(310): Store=f6c0e544daaf362d155e8db195223970/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:46,203 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-24 23:10:46,203 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure d0eea26678a47a60b1f7e8952a5886d1, server=jenkins-hbase4.apache.org,42429,1690240220974 in 187 msec 2023-07-24 23:10:46,204 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=136 2023-07-24 23:10:46,204 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=136, state=SUCCESS; OpenRegionProcedure 9c3de9391b88a722a7ac6bd0ad977d1d, server=jenkins-hbase4.apache.org,46215,1690240224735 in 186 msec 2023-07-24 23:10:46,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,204 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, ASSIGN in 349 msec 2023-07-24 23:10:46,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, ASSIGN in 350 msec 2023-07-24 23:10:46,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:46,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6c0e544daaf362d155e8db195223970; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11811013280, jitterRate=0.09998632967472076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:46,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6c0e544daaf362d155e8db195223970: 2023-07-24 23:10:46,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970., pid=141, masterSystemTime=1690240246166 2023-07-24 23:10:46,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 
2023-07-24 23:10:46,212 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=f6c0e544daaf362d155e8db195223970, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,212 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246212"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240246212"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240246212"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240246212"}]},"ts":"1690240246212"} 2023-07-24 23:10:46,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-24 23:10:46,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure f6c0e544daaf362d155e8db195223970, server=jenkins-hbase4.apache.org,42429,1690240220974 in 198 msec 2023-07-24 23:10:46,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-24 23:10:46,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, ASSIGN in 361 msec 2023-07-24 23:10:46,217 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:46,217 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240246217"}]},"ts":"1690240246217"} 2023-07-24 23:10:46,218 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-24 23:10:46,220 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:46,221 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 477 msec 2023-07-24 23:10:46,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 23:10:46,349 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-24 23:10:46,350 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-24 23:10:46,350 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:46,354 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
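Once CreateTableProcedure pid=132 reports SUCCESS, the client side blocks until every region of the new table is assigned before driving the rsgroup operations; that is what the HBaseTestingUtility "Waiting until all regions ... get assigned" lines above reflect. A sketch of that wait, assuming the shared mini-cluster test utility (the 60000 ms timeout is the one printed in the log):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignment {
      // Blocks until all regions of the table are assigned and reflected in
      // hbase:meta, or fails after the given timeout.
      static void await(HBaseTestingUtility util) throws Exception {
        util.waitUntilAllRegionsAssigned(
            TableName.valueOf("Group_testDisabledTableMove"), 60000);
      }
    }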
2023-07-24 23:10:46,355 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:46,355 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-24 23:10:46,355 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:46,361 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 23:10:46,361 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:46,362 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 23:10:46,363 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 23:10:46,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 23:10:46,368 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240246368"}]},"ts":"1690240246368"} 2023-07-24 23:10:46,369 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-24 23:10:46,371 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-24 23:10:46,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, UNASSIGN}] 2023-07-24 23:10:46,374 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, UNASSIGN 2023-07-24 23:10:46,374 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, UNASSIGN 2023-07-24 23:10:46,374 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, UNASSIGN 2023-07-24 23:10:46,375 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, UNASSIGN 2023-07-24 23:10:46,375 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, UNASSIGN 2023-07-24 23:10:46,375 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=d0eea26678a47a60b1f7e8952a5886d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,375 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7dadbaacb571422dbc56cfb5b7eed574, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,376 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f6c0e544daaf362d155e8db195223970, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,376 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246375"}]},"ts":"1690240246375"} 2023-07-24 23:10:46,376 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246375"}]},"ts":"1690240246375"} 2023-07-24 23:10:46,376 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=9c3de9391b88a722a7ac6bd0ad977d1d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,376 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246375"}]},"ts":"1690240246375"} 2023-07-24 23:10:46,376 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246376"}]},"ts":"1690240246376"} 2023-07-24 23:10:46,376 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=ae065852d9cf91abfaab0c45904f11e3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,376 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240246376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240246376"}]},"ts":"1690240246376"} 2023-07-24 23:10:46,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=146, state=RUNNABLE; CloseRegionProcedure d0eea26678a47a60b1f7e8952a5886d1, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,378 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 7dadbaacb571422dbc56cfb5b7eed574, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:46,378 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=145, state=RUNNABLE; CloseRegionProcedure f6c0e544daaf362d155e8db195223970, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 9c3de9391b88a722a7ac6bd0ad977d1d, server=jenkins-hbase4.apache.org,46215,1690240224735}] 2023-07-24 23:10:46,380 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure ae065852d9cf91abfaab0c45904f11e3, server=jenkins-hbase4.apache.org,42429,1690240220974}] 2023-07-24 23:10:46,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 23:10:46,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6c0e544daaf362d155e8db195223970, disabling compactions & flushes 2023-07-24 23:10:46,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7dadbaacb571422dbc56cfb5b7eed574, disabling compactions & flushes 2023-07-24 23:10:46,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 
2023-07-24 23:10:46,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. after waiting 0 ms 2023-07-24 23:10:46,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. after waiting 0 ms 2023-07-24 23:10:46,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:46,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:46,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574. 2023-07-24 23:10:46,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7dadbaacb571422dbc56cfb5b7eed574: 2023-07-24 23:10:46,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970. 2023-07-24 23:10:46,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6c0e544daaf362d155e8db195223970: 2023-07-24 23:10:46,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9c3de9391b88a722a7ac6bd0ad977d1d, disabling compactions & flushes 2023-07-24 23:10:46,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 
2023-07-24 23:10:46,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:46,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. after waiting 0 ms 2023-07-24 23:10:46,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:46,546 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7dadbaacb571422dbc56cfb5b7eed574, regionState=CLOSED 2023-07-24 23:10:46,546 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240246546"}]},"ts":"1690240246546"} 2023-07-24 23:10:46,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d0eea26678a47a60b1f7e8952a5886d1, disabling compactions & flushes 2023-07-24 23:10:46,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:46,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 2023-07-24 23:10:46,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. after waiting 0 ms 2023-07-24 23:10:46,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 
2023-07-24 23:10:46,559 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=f6c0e544daaf362d155e8db195223970, regionState=CLOSED 2023-07-24 23:10:46,559 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246558"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240246558"}]},"ts":"1690240246558"} 2023-07-24 23:10:46,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:46,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d. 2023-07-24 23:10:46,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9c3de9391b88a722a7ac6bd0ad977d1d: 2023-07-24 23:10:46,561 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-24 23:10:46,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 7dadbaacb571422dbc56cfb5b7eed574, server=jenkins-hbase4.apache.org,46215,1690240224735 in 173 msec 2023-07-24 23:10:46,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,563 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=9c3de9391b88a722a7ac6bd0ad977d1d, regionState=CLOSED 2023-07-24 23:10:46,563 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240246563"}]},"ts":"1690240246563"} 2023-07-24 23:10:46,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:46,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1. 
2023-07-24 23:10:46,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d0eea26678a47a60b1f7e8952a5886d1: 2023-07-24 23:10:46,565 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7dadbaacb571422dbc56cfb5b7eed574, UNASSIGN in 190 msec 2023-07-24 23:10:46,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=145 2023-07-24 23:10:46,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=145, state=SUCCESS; CloseRegionProcedure f6c0e544daaf362d155e8db195223970, server=jenkins-hbase4.apache.org,42429,1690240220974 in 183 msec 2023-07-24 23:10:46,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,570 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6c0e544daaf362d155e8db195223970, UNASSIGN in 193 msec 2023-07-24 23:10:46,570 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=d0eea26678a47a60b1f7e8952a5886d1, regionState=CLOSED 2023-07-24 23:10:46,570 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690240246570"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240246570"}]},"ts":"1690240246570"} 2023-07-24 23:10:46,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-24 23:10:46,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 9c3de9391b88a722a7ac6bd0ad977d1d, server=jenkins-hbase4.apache.org,46215,1690240224735 in 187 msec 2023-07-24 23:10:46,572 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9c3de9391b88a722a7ac6bd0ad977d1d, UNASSIGN in 199 msec 2023-07-24 23:10:46,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae065852d9cf91abfaab0c45904f11e3, disabling compactions & flushes 2023-07-24 23:10:46,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 
after waiting 0 ms 2023-07-24 23:10:46,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,581 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-24 23:10:46,582 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; CloseRegionProcedure d0eea26678a47a60b1f7e8952a5886d1, server=jenkins-hbase4.apache.org,42429,1690240220974 in 203 msec 2023-07-24 23:10:46,583 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d0eea26678a47a60b1f7e8952a5886d1, UNASSIGN in 210 msec 2023-07-24 23:10:46,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:46,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3. 2023-07-24 23:10:46,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae065852d9cf91abfaab0c45904f11e3: 2023-07-24 23:10:46,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,586 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=ae065852d9cf91abfaab0c45904f11e3, regionState=CLOSED 2023-07-24 23:10:46,586 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690240246586"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240246586"}]},"ts":"1690240246586"} 2023-07-24 23:10:46,588 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-24 23:10:46,588 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure ae065852d9cf91abfaab0c45904f11e3, server=jenkins-hbase4.apache.org,42429,1690240220974 in 207 msec 2023-07-24 23:10:46,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-24 23:10:46,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ae065852d9cf91abfaab0c45904f11e3, UNASSIGN in 216 msec 2023-07-24 23:10:46,590 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240246590"}]},"ts":"1690240246590"} 2023-07-24 23:10:46,592 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-24 23:10:46,593 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-24 23:10:46,594 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 231 msec 2023-07-24 23:10:46,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 23:10:46,670 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-24 23:10:46,670 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,672 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:46,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:46,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-24 23:10:46,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1788690509, current retry=0 2023-07-24 23:10:46,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1788690509. 
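With the table already DISABLED, the MoveTables call only rewrites the rsgroup znodes (/hbase/rsgroup/default, /hbase/rsgroup/Group_testDisabledTableMove_1788690509) and skips region movement entirely, which is exactly what the "Skipping move regions because the table ... is disabled" and "Moving 0 region(s)" lines record. A hedged sketch of issuing that move from a client on branch-2, assuming the target group was added earlier in the test and a Connection to the mini cluster is available:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveDisabledTable {
      // Moves the disabled table into the target rsgroup; only group metadata
      // changes, no TransitRegionStateProcedures are scheduled for its regions.
      static void move(Connection conn) throws Exception {
        new RSGroupAdminClient(conn).moveTables(
            Collections.singleton(TableName.valueOf("Group_testDisabledTableMove")),
            "Group_testDisabledTableMove_1788690509");
      }
    }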
2023-07-24 23:10:46,680 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:46,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,685 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 23:10:46,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:46,691 INFO [Listener at localhost/39785] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 23:10:46,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 23:10:46,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:46,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:34864 deadline: 1690240306691, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-24 23:10:46,693 DEBUG [Listener at localhost/39785] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
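The second disable request fails fast with TableNotEnabledException because DisableTableProcedure's preflight check sees the table already in DISABLED state, and the test utility simply notes "already disabled, so just deleting it". Callers that want an idempotent disable typically check the state first or tolerate the exception, roughly as sketched here (this is not the utility's own implementation):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    final class DisableIfEnabled {
      // Disables the table only when it is currently enabled; a disable that
      // raced with this check surfaces as TableNotEnabledException and is ignored.
      static void disableQuietly(Admin admin, TableName table) throws IOException {
        try {
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
        } catch (TableNotEnabledException alreadyDisabled) {
          // Already disabled; nothing left to do.
        }
      }
    }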
2023-07-24 23:10:46,693 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-24 23:10:46,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,698 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1788690509' 2023-07-24 23:10:46,699 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:46,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:46,714 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,714 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,714 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 23:10:46,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,714 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,718 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/f, FileablePath, 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/recovered.edits] 2023-07-24 23:10:46,718 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/recovered.edits] 2023-07-24 23:10:46,718 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/recovered.edits] 2023-07-24 23:10:46,718 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/recovered.edits] 2023-07-24 23:10:46,719 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/f, FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/recovered.edits] 2023-07-24 23:10:46,730 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1/recovered.edits/4.seqid 2023-07-24 23:10:46,733 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d/recovered.edits/4.seqid 2023-07-24 23:10:46,733 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/d0eea26678a47a60b1f7e8952a5886d1 2023-07-24 23:10:46,734 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/recovered.edits/4.seqid to 
hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3/recovered.edits/4.seqid 2023-07-24 23:10:46,734 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970/recovered.edits/4.seqid 2023-07-24 23:10:46,734 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/9c3de9391b88a722a7ac6bd0ad977d1d 2023-07-24 23:10:46,735 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/recovered.edits/4.seqid to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/archive/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574/recovered.edits/4.seqid 2023-07-24 23:10:46,736 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/ae065852d9cf91abfaab0c45904f11e3 2023-07-24 23:10:46,736 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/f6c0e544daaf362d155e8db195223970 2023-07-24 23:10:46,736 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/.tmp/data/default/Group_testDisabledTableMove/7dadbaacb571422dbc56cfb5b7eed574 2023-07-24 23:10:46,736 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 23:10:46,740 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,742 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-24 23:10:46,749 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-24 23:10:46,751 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,751 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-24 23:10:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240246751"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240246751"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240246751"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240246751"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,752 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240246751"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,754 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 23:10:46,754 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7dadbaacb571422dbc56cfb5b7eed574, NAME => 'Group_testDisabledTableMove,,1690240245742.7dadbaacb571422dbc56cfb5b7eed574.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f6c0e544daaf362d155e8db195223970, NAME => 'Group_testDisabledTableMove,aaaaa,1690240245742.f6c0e544daaf362d155e8db195223970.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => d0eea26678a47a60b1f7e8952a5886d1, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690240245742.d0eea26678a47a60b1f7e8952a5886d1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 9c3de9391b88a722a7ac6bd0ad977d1d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690240245742.9c3de9391b88a722a7ac6bd0ad977d1d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => ae065852d9cf91abfaab0c45904f11e3, NAME => 'Group_testDisabledTableMove,zzzzz,1690240245742.ae065852d9cf91abfaab0c45904f11e3.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 23:10:46,755 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-24 23:10:46,755 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240246755"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:46,757 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-24 23:10:46,759 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 23:10:46,760 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 66 msec 2023-07-24 23:10:46,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 23:10:46,816 INFO [Listener at localhost/39785] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-24 23:10:46,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:46,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:46,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:46,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:46,821 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:46,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:46,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:46,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:46,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:46,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:46,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:46,831 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981] to rsgroup default 2023-07-24 23:10:46,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:46,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1788690509, current retry=0 2023-07-24 23:10:46,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33649,1690240221185, jenkins-hbase4.apache.org,36981,1690240220580] are moved back to Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1788690509 => default 2023-07-24 23:10:46,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:46,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1788690509 2023-07-24 23:10:46,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:46,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:46,843 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:46,844 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:46,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:46,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 
23:10:46,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:46,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,854 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:46,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:46,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241446854, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:46,854 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:46,856 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:46,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,857 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:46,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:46,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:46,876 INFO [Listener at localhost/39785] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 512) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-877025124_17 at /127.0.0.1:49240 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-649929925_17 at /127.0.0.1:36928 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x45c34053-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63581179-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=792 (was 771) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=342 (was 342), ProcessCount=177 (was 177), AvailableMemoryMB=5745 (was 5749) 2023-07-24 23:10:46,876 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 23:10:46,894 INFO [Listener at localhost/39785] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=792, MaxFileDescriptor=60000, SystemLoadAverage=342, ProcessCount=177, AvailableMemoryMB=5744 2023-07-24 23:10:46,894 WARN [Listener at localhost/39785] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 23:10:46,894 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-24 23:10:46,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,898 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:46,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:46,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:46,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:46,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:46,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:46,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:46,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:46,908 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:46,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:46,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:46,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:46,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:46,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42959] to rsgroup master 2023-07-24 23:10:46,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:46,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34864 deadline: 1690241446925, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 2023-07-24 23:10:46,926 WARN [Listener at localhost/39785] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42959 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:46,927 INFO [Listener at localhost/39785] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:46,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:46,928 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:46,928 INFO [Listener at localhost/39785] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33649, jenkins-hbase4.apache.org:36981, jenkins-hbase4.apache.org:42429, jenkins-hbase4.apache.org:46215], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:46,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:46,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42959] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:46,929 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 23:10:46,929 INFO [Listener at localhost/39785] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 23:10:46,930 DEBUG [Listener at localhost/39785] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x52c3922c to 127.0.0.1:59310 2023-07-24 23:10:46,930 DEBUG [Listener at localhost/39785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,931 DEBUG [Listener at localhost/39785] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 23:10:46,931 DEBUG [Listener at localhost/39785] util.JVMClusterUtil(257): Found active master hash=1150221509, stopped=false 2023-07-24 23:10:46,932 DEBUG [Listener at localhost/39785] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 23:10:46,932 DEBUG [Listener at localhost/39785] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 23:10:46,932 INFO [Listener at localhost/39785] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:46,933 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:46,933 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:46,933 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:46,933 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:46,933 INFO [Listener at localhost/39785] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 23:10:46,934 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:46,934 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:46,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:46,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:46,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:46,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:46,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:46,934 DEBUG [Listener at localhost/39785] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5970ea93 to 127.0.0.1:59310 2023-07-24 23:10:46,935 DEBUG [Listener at localhost/39785] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,935 INFO [Listener at localhost/39785] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36981,1690240220580' ***** 2023-07-24 23:10:46,935 INFO [Listener at localhost/39785] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
2023-07-24 23:10:46,935 INFO [Listener at localhost/39785] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42429,1690240220974' ***** 2023-07-24 23:10:46,935 INFO [Listener at localhost/39785] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:46,935 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:46,935 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:46,935 INFO [Listener at localhost/39785] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33649,1690240221185' ***** 2023-07-24 23:10:46,939 INFO [Listener at localhost/39785] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:46,941 INFO [Listener at localhost/39785] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46215,1690240224735' ***** 2023-07-24 23:10:46,941 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:46,941 INFO [Listener at localhost/39785] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:46,941 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:46,941 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 23:10:46,941 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 23:10:46,959 INFO [RS:0;jenkins-hbase4:36981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@333a682e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:46,959 INFO [RS:1;jenkins-hbase4:42429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@659da4f9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:46,959 INFO [RS:3;jenkins-hbase4:46215] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4783a073{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:46,959 INFO [RS:2;jenkins-hbase4:33649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1001ad12{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:46,965 INFO [RS:3;jenkins-hbase4:46215] server.AbstractConnector(383): Stopped ServerConnector@143940fe{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:46,965 INFO [RS:3;jenkins-hbase4:46215] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:46,966 INFO [RS:3;jenkins-hbase4:46215] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ebea3f1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:46,968 INFO [RS:3;jenkins-hbase4:46215] handler.ContextHandler(1159): 
Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b476413{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:46,969 INFO [RS:1;jenkins-hbase4:42429] server.AbstractConnector(383): Stopped ServerConnector@71b1cadf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:46,970 INFO [RS:1;jenkins-hbase4:42429] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:46,969 INFO [RS:0;jenkins-hbase4:36981] server.AbstractConnector(383): Stopped ServerConnector@32ff9d35{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:46,971 INFO [RS:0;jenkins-hbase4:36981] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:46,971 INFO [RS:2;jenkins-hbase4:33649] server.AbstractConnector(383): Stopped ServerConnector@44a90bfb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:46,972 INFO [RS:2;jenkins-hbase4:33649] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:46,972 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:46,971 INFO [RS:1;jenkins-hbase4:42429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e9c82{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:46,973 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:46,973 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:46,973 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:46,973 INFO [RS:2;jenkins-hbase4:33649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7756df1d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:46,973 INFO [RS:0;jenkins-hbase4:36981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e03ec49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:46,975 INFO [RS:3;jenkins-hbase4:46215] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:46,972 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:46,975 INFO [RS:3;jenkins-hbase4:46215] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:46,975 INFO [RS:3;jenkins-hbase4:46215] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 23:10:46,975 INFO [RS:1;jenkins-hbase4:42429] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@17d42069{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:46,975 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:46,976 INFO [RS:2;jenkins-hbase4:33649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ebb434d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:46,975 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:46,976 INFO [RS:0;jenkins-hbase4:36981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30b14bcd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:46,976 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(3305): Received CLOSE for 386ba32f0c3b0408cdca5a4ed5ced8e4 2023-07-24 23:10:46,977 INFO [RS:1;jenkins-hbase4:42429] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:46,977 INFO [RS:2;jenkins-hbase4:33649] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:46,977 INFO [RS:2;jenkins-hbase4:33649] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:46,977 INFO [RS:2;jenkins-hbase4:33649] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:46,978 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:46,978 DEBUG [RS:2;jenkins-hbase4:33649] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42022bb8 to 127.0.0.1:59310 2023-07-24 23:10:46,978 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(3305): Received CLOSE for c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:46,978 INFO [RS:0;jenkins-hbase4:36981] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:46,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 386ba32f0c3b0408cdca5a4ed5ced8e4, disabling compactions & flushes 2023-07-24 23:10:46,978 INFO [RS:0;jenkins-hbase4:36981] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:46,978 INFO [RS:0;jenkins-hbase4:36981] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:46,978 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:46,978 DEBUG [RS:0;jenkins-hbase4:36981] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c50fe2d to 127.0.0.1:59310 2023-07-24 23:10:46,978 DEBUG [RS:0;jenkins-hbase4:36981] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,979 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36981,1690240220580; all regions closed. 
2023-07-24 23:10:46,978 DEBUG [RS:2;jenkins-hbase4:33649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,980 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33649,1690240221185; all regions closed. 2023-07-24 23:10:46,978 INFO [RS:1;jenkins-hbase4:42429] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:46,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:46,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:46,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. after waiting 0 ms 2023-07-24 23:10:46,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:46,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 386ba32f0c3b0408cdca5a4ed5ced8e4 1/1 column families, dataSize=22.11 KB heapSize=36.49 KB 2023-07-24 23:10:46,978 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(3305): Received CLOSE for 07163ab4ec4541d8899adbf059caab34 2023-07-24 23:10:46,980 INFO [RS:1;jenkins-hbase4:42429] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:46,980 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:46,981 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(3305): Received CLOSE for e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:46,981 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:46,981 DEBUG [RS:3;jenkins-hbase4:46215] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c443cd9 to 127.0.0.1:59310 2023-07-24 23:10:46,981 DEBUG [RS:3;jenkins-hbase4:46215] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,981 DEBUG [RS:1;jenkins-hbase4:42429] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58697212 to 127.0.0.1:59310 2023-07-24 23:10:46,981 DEBUG [RS:1;jenkins-hbase4:42429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:46,981 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 23:10:46,981 INFO [RS:3;jenkins-hbase4:46215] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:46,981 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1478): Online Regions={e5172a504c1b9d74aaf33c65006a1502=testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502.} 2023-07-24 23:10:46,981 INFO [RS:3;jenkins-hbase4:46215] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:46,981 INFO [RS:3;jenkins-hbase4:46215] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:46,982 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 23:10:46,986 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:46,992 DEBUG [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1504): Waiting on e5172a504c1b9d74aaf33c65006a1502 2023-07-24 23:10:46,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e5172a504c1b9d74aaf33c65006a1502, disabling compactions & flushes 2023-07-24 23:10:46,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:46,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:46,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. after waiting 0 ms 2023-07-24 23:10:46,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:46,998 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 23:10:46,998 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1478): Online Regions={386ba32f0c3b0408cdca5a4ed5ced8e4=hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4., 1588230740=hbase:meta,,1.1588230740, c59756cef5ea3b9231917a64964f5e23=hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23., 07163ab4ec4541d8899adbf059caab34=unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34.} 2023-07-24 23:10:46,998 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1504): Waiting on 07163ab4ec4541d8899adbf059caab34, 1588230740, 386ba32f0c3b0408cdca5a4ed5ced8e4, c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:46,998 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:46,998 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:46,998 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:46,998 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:46,998 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:46,998 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.95 KB heapSize=121.08 KB 2023-07-24 23:10:47,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/testRename/e5172a504c1b9d74aaf33c65006a1502/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 23:10:47,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:47,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e5172a504c1b9d74aaf33c65006a1502: 2023-07-24 23:10:47,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690240240086.e5172a504c1b9d74aaf33c65006a1502. 2023-07-24 23:10:47,034 DEBUG [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,034 INFO [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33649%2C1690240221185.meta:.meta(num 1690240223396) 2023-07-24 23:10:47,034 DEBUG [RS:0;jenkins-hbase4:36981] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,034 INFO [RS:0;jenkins-hbase4:36981] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36981%2C1690240220580:(num 1690240223062) 2023-07-24 23:10:47,034 DEBUG [RS:0;jenkins-hbase4:36981] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:47,034 INFO [RS:0;jenkins-hbase4:36981] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:47,058 INFO [RS:0;jenkins-hbase4:36981] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:47,058 INFO [RS:0;jenkins-hbase4:36981] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:47,058 INFO [RS:0;jenkins-hbase4:36981] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:47,059 INFO [RS:0;jenkins-hbase4:36981] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 23:10:47,058 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 23:10:47,061 INFO [RS:0;jenkins-hbase4:36981] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36981 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36981,1690240220580 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,073 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,074 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36981,1690240220580] 2023-07-24 23:10:47,074 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36981,1690240220580; numProcessing=1 2023-07-24 23:10:47,075 DEBUG [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,075 INFO [RS:2;jenkins-hbase4:33649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33649%2C1690240221185:(num 1690240223065) 2023-07-24 23:10:47,075 DEBUG [RS:2;jenkins-hbase4:33649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:47,075 INFO [RS:2;jenkins-hbase4:33649] regionserver.LeaseManager(133): Closed 
leases 2023-07-24 23:10:47,075 INFO [RS:2;jenkins-hbase4:33649] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:47,077 INFO [RS:2;jenkins-hbase4:33649] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:47,077 INFO [RS:2;jenkins-hbase4:33649] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:47,077 INFO [RS:2;jenkins-hbase4:33649] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 23:10:47,077 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:47,077 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36981,1690240220580 already deleted, retry=false 2023-07-24 23:10:47,077 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36981,1690240220580 expired; onlineServers=3 2023-07-24 23:10:47,083 INFO [RS:2;jenkins-hbase4:33649] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33649 2023-07-24 23:10:47,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.14 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/info/1943abc38303488c880e0b2de253f9bd 2023-07-24 23:10:47,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1943abc38303488c880e0b2de253f9bd 2023-07-24 23:10:47,102 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 23:10:47,102 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 23:10:47,135 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/rep_barrier/3e1348328548460686231111f5abaa66 2023-07-24 23:10:47,145 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e1348328548460686231111f5abaa66 2023-07-24 23:10:47,176 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,176 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:36981-0x1019999755d0001, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,176 INFO [RS:0;jenkins-hbase4:36981] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36981,1690240220580; zookeeper connection closed. 
2023-07-24 23:10:47,176 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7b9802f9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7b9802f9 2023-07-24 23:10:47,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/table/82c09072731c40ffb44bbe8c092ef0a2 2023-07-24 23:10:47,177 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:47,177 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,177 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:47,177 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33649,1690240221185 2023-07-24 23:10:47,178 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33649,1690240221185] 2023-07-24 23:10:47,179 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33649,1690240221185; numProcessing=2 2023-07-24 23:10:47,183 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82c09072731c40ffb44bbe8c092ef0a2 2023-07-24 23:10:47,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/info/1943abc38303488c880e0b2de253f9bd as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info/1943abc38303488c880e0b2de253f9bd 2023-07-24 23:10:47,191 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1943abc38303488c880e0b2de253f9bd 2023-07-24 23:10:47,191 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/info/1943abc38303488c880e0b2de253f9bd, entries=94, sequenceid=210, filesize=15.6 K 2023-07-24 23:10:47,192 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42429,1690240220974; all regions closed. 
2023-07-24 23:10:47,198 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1504): Waiting on 07163ab4ec4541d8899adbf059caab34, 1588230740, 386ba32f0c3b0408cdca5a4ed5ced8e4, c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:47,200 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/rep_barrier/3e1348328548460686231111f5abaa66 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier/3e1348328548460686231111f5abaa66 2023-07-24 23:10:47,215 DEBUG [RS:1;jenkins-hbase4:42429] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,215 INFO [RS:1;jenkins-hbase4:42429] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42429%2C1690240220974:(num 1690240223062) 2023-07-24 23:10:47,215 DEBUG [RS:1;jenkins-hbase4:42429] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:47,215 INFO [RS:1;jenkins-hbase4:42429] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:47,219 INFO [RS:1;jenkins-hbase4:42429] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:47,219 INFO [RS:1;jenkins-hbase4:42429] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:47,219 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:47,219 INFO [RS:1;jenkins-hbase4:42429] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:47,219 INFO [RS:1;jenkins-hbase4:42429] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:47,220 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e1348328548460686231111f5abaa66 2023-07-24 23:10:47,220 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/rep_barrier/3e1348328548460686231111f5abaa66, entries=18, sequenceid=210, filesize=6.9 K 2023-07-24 23:10:47,221 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/.tmp/table/82c09072731c40ffb44bbe8c092ef0a2 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table/82c09072731c40ffb44bbe8c092ef0a2 2023-07-24 23:10:47,223 INFO [RS:1;jenkins-hbase4:42429] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42429 2023-07-24 23:10:47,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 82c09072731c40ffb44bbe8c092ef0a2 2023-07-24 23:10:47,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/table/82c09072731c40ffb44bbe8c092ef0a2, entries=27, sequenceid=210, filesize=7.2 K 2023-07-24 23:10:47,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.95 KB/78793, heapSize ~121.03 KB/123936, currentSize=0 B/0 for 1588230740 in 233ms, sequenceid=210, compaction requested=false 2023-07-24 23:10:47,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=17 2023-07-24 23:10:47,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:47,255 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:47,255 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:47,255 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:47,279 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,279 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:33649-0x1019999755d0003, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,279 INFO [RS:2;jenkins-hbase4:33649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33649,1690240221185; zookeeper connection closed. 
2023-07-24 23:10:47,279 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4beae836] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4beae836 2023-07-24 23:10:47,280 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:47,280 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42429,1690240220974 2023-07-24 23:10:47,280 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,282 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33649,1690240221185 already deleted, retry=false 2023-07-24 23:10:47,282 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33649,1690240221185 expired; onlineServers=2 2023-07-24 23:10:47,290 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42429,1690240220974] 2023-07-24 23:10:47,291 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42429,1690240220974; numProcessing=3 2023-07-24 23:10:47,292 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42429,1690240220974 already deleted, retry=false 2023-07-24 23:10:47,292 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42429,1690240220974 expired; onlineServers=1 2023-07-24 23:10:47,399 DEBUG [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1504): Waiting on 07163ab4ec4541d8899adbf059caab34, 386ba32f0c3b0408cdca5a4ed5ced8e4, c59756cef5ea3b9231917a64964f5e23 2023-07-24 23:10:47,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.11 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/.tmp/m/fb06090d945d4d7e8608297b21885f39 2023-07-24 23:10:47,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fb06090d945d4d7e8608297b21885f39 2023-07-24 23:10:47,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/.tmp/m/fb06090d945d4d7e8608297b21885f39 as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m/fb06090d945d4d7e8608297b21885f39 2023-07-24 23:10:47,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for fb06090d945d4d7e8608297b21885f39 2023-07-24 23:10:47,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/m/fb06090d945d4d7e8608297b21885f39, entries=22, sequenceid=101, filesize=5.9 K 2023-07-24 23:10:47,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.11 KB/22642, heapSize ~36.48 KB/37352, currentSize=0 B/0 for 386ba32f0c3b0408cdca5a4ed5ced8e4 in 506ms, sequenceid=101, compaction requested=false 2023-07-24 23:10:47,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/rsgroup/386ba32f0c3b0408cdca5a4ed5ced8e4/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-24 23:10:47,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:47,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 386ba32f0c3b0408cdca5a4ed5ced8e4: 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690240223833.386ba32f0c3b0408cdca5a4ed5ced8e4. 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c59756cef5ea3b9231917a64964f5e23, disabling compactions & flushes 2023-07-24 23:10:47,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. after waiting 0 ms 2023-07-24 23:10:47,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:47,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/hbase/namespace/c59756cef5ea3b9231917a64964f5e23/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-24 23:10:47,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 2023-07-24 23:10:47,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c59756cef5ea3b9231917a64964f5e23: 2023-07-24 23:10:47,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690240223673.c59756cef5ea3b9231917a64964f5e23. 
2023-07-24 23:10:47,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 07163ab4ec4541d8899adbf059caab34, disabling compactions & flushes 2023-07-24 23:10:47,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:47,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:47,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. after waiting 0 ms 2023-07-24 23:10:47,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:47,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/data/default/unmovedTable/07163ab4ec4541d8899adbf059caab34/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 23:10:47,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:47,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 07163ab4ec4541d8899adbf059caab34: 2023-07-24 23:10:47,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690240241752.07163ab4ec4541d8899adbf059caab34. 2023-07-24 23:10:47,599 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46215,1690240224735; all regions closed. 2023-07-24 23:10:47,604 DEBUG [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,604 INFO [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46215%2C1690240224735.meta:.meta(num 1690240226042) 2023-07-24 23:10:47,608 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/WALs/jenkins-hbase4.apache.org,46215,1690240224735/jenkins-hbase4.apache.org%2C46215%2C1690240224735.1690240225210 not finished, retry = 0 2023-07-24 23:10:47,634 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,634 INFO [RS:1;jenkins-hbase4:42429] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42429,1690240220974; zookeeper connection closed. 
2023-07-24 23:10:47,635 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:42429-0x1019999755d0002, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,635 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@612fc1ca] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@612fc1ca 2023-07-24 23:10:47,711 DEBUG [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/oldWALs 2023-07-24 23:10:47,711 INFO [RS:3;jenkins-hbase4:46215] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46215%2C1690240224735:(num 1690240225210) 2023-07-24 23:10:47,711 DEBUG [RS:3;jenkins-hbase4:46215] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:47,711 INFO [RS:3;jenkins-hbase4:46215] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:47,711 INFO [RS:3;jenkins-hbase4:46215] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:47,711 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:47,712 INFO [RS:3;jenkins-hbase4:46215] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46215 2023-07-24 23:10:47,714 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46215,1690240224735 2023-07-24 23:10:47,714 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:47,715 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46215,1690240224735] 2023-07-24 23:10:47,715 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46215,1690240224735; numProcessing=4 2023-07-24 23:10:47,717 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46215,1690240224735 already deleted, retry=false 2023-07-24 23:10:47,718 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46215,1690240224735 expired; onlineServers=0 2023-07-24 23:10:47,718 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42959,1690240218606' ***** 2023-07-24 23:10:47,718 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 23:10:47,718 DEBUG [M:0;jenkins-hbase4:42959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b3f6fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:47,719 INFO [M:0;jenkins-hbase4:42959] 
regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:47,721 INFO [M:0;jenkins-hbase4:42959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5f9ed0a6{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:47,721 INFO [M:0;jenkins-hbase4:42959] server.AbstractConnector(383): Stopped ServerConnector@7d3cded5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:47,721 INFO [M:0;jenkins-hbase4:42959] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:47,721 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:47,721 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:47,722 INFO [M:0;jenkins-hbase4:42959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ff95bf2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:47,722 INFO [M:0;jenkins-hbase4:42959] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@48ee05fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:47,723 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:47,723 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42959,1690240218606 2023-07-24 23:10:47,723 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42959,1690240218606; all regions closed. 2023-07-24 23:10:47,723 DEBUG [M:0;jenkins-hbase4:42959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:47,723 INFO [M:0;jenkins-hbase4:42959] master.HMaster(1491): Stopping master jetty server 2023-07-24 23:10:47,724 INFO [M:0;jenkins-hbase4:42959] server.AbstractConnector(383): Stopped ServerConnector@30d25553{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:47,724 DEBUG [M:0;jenkins-hbase4:42959] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 23:10:47,724 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-24 23:10:47,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240222684] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240222684,5,FailOnTimeoutGroup] 2023-07-24 23:10:47,724 DEBUG [M:0;jenkins-hbase4:42959] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 23:10:47,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240222683] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240222683,5,FailOnTimeoutGroup] 2023-07-24 23:10:47,724 INFO [M:0;jenkins-hbase4:42959] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 23:10:47,724 INFO [M:0;jenkins-hbase4:42959] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 23:10:47,725 INFO [M:0;jenkins-hbase4:42959] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 23:10:47,725 DEBUG [M:0;jenkins-hbase4:42959] master.HMaster(1512): Stopping service threads 2023-07-24 23:10:47,725 INFO [M:0;jenkins-hbase4:42959] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 23:10:47,725 ERROR [M:0;jenkins-hbase4:42959] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 23:10:47,726 INFO [M:0;jenkins-hbase4:42959] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 23:10:47,726 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 23:10:47,726 DEBUG [M:0;jenkins-hbase4:42959] zookeeper.ZKUtil(398): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 23:10:47,726 WARN [M:0;jenkins-hbase4:42959] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 23:10:47,726 INFO [M:0;jenkins-hbase4:42959] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 23:10:47,727 INFO [M:0;jenkins-hbase4:42959] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 23:10:47,727 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:47,727 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:47,727 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 23:10:47,727 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:47,727 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:47,727 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.20 KB heapSize=621.31 KB 2023-07-24 23:10:47,747 INFO [M:0;jenkins-hbase4:42959] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.20 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/645f392e463849b48c1602792d5afcaf 2023-07-24 23:10:47,752 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/645f392e463849b48c1602792d5afcaf as hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/645f392e463849b48c1602792d5afcaf 2023-07-24 23:10:47,757 INFO [M:0;jenkins-hbase4:42959] regionserver.HStore(1080): Added hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/645f392e463849b48c1602792d5afcaf, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-24 23:10:47,758 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegion(2948): Finished flush of dataSize ~519.20 KB/531660, heapSize ~621.30 KB/636208, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1152, compaction requested=false 2023-07-24 23:10:47,759 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:47,759 DEBUG [M:0;jenkins-hbase4:42959] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:47,763 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:47,763 INFO [M:0;jenkins-hbase4:42959] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 23:10:47,764 INFO [M:0;jenkins-hbase4:42959] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42959 2023-07-24 23:10:47,765 DEBUG [M:0;jenkins-hbase4:42959] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42959,1690240218606 already deleted, retry=false 2023-07-24 23:10:47,817 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,817 INFO [RS:3;jenkins-hbase4:46215] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46215,1690240224735; zookeeper connection closed. 
2023-07-24 23:10:47,817 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): regionserver:46215-0x1019999755d000b, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,817 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e3ede48] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e3ede48 2023-07-24 23:10:47,817 INFO [Listener at localhost/39785] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 23:10:47,917 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,917 INFO [M:0;jenkins-hbase4:42959] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42959,1690240218606; zookeeper connection closed. 2023-07-24 23:10:47,917 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): master:42959-0x1019999755d0000, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:47,919 WARN [Listener at localhost/39785] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 23:10:47,923 INFO [Listener at localhost/39785] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:48,026 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:48,027 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1477399179-172.31.14.131-1690240214735 (Datanode Uuid 97cfdb7a-ec5a-4873-9369-1379102e7245) service to localhost/127.0.0.1:38733 2023-07-24 23:10:48,028 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data5/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,028 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data6/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,030 WARN [Listener at localhost/39785] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 23:10:48,033 INFO [Listener at localhost/39785] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:48,136 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:48,136 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1477399179-172.31.14.131-1690240214735 (Datanode Uuid 033ae720-e48d-4c5d-a692-6b037d8757b2) service to localhost/127.0.0.1:38733 2023-07-24 23:10:48,137 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data3/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,137 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data4/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,138 WARN [Listener at localhost/39785] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 23:10:48,141 INFO [Listener at localhost/39785] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:48,243 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:48,244 WARN [BP-1477399179-172.31.14.131-1690240214735 heartbeating to localhost/127.0.0.1:38733] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1477399179-172.31.14.131-1690240214735 (Datanode Uuid de6bb3d1-2617-46d5-bec0-6ddb8e268b79) service to localhost/127.0.0.1:38733 2023-07-24 23:10:48,244 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data1/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,244 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/cluster_a5dd96c0-6e83-fd59-c008-51f91e0cf7a8/dfs/data/data2/current/BP-1477399179-172.31.14.131-1690240214735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:48,275 INFO [Listener at localhost/39785] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:48,401 INFO [Listener at localhost/39785] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 23:10:48,458 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 23:10:48,458 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 23:10:48,458 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.log.dir so I do NOT create it in target/test-data/6f39b251-506e-547e-6571-b32ebac4f970 2023-07-24 23:10:48,458 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a7663dc4-29f4-e0fd-207e-efe738df20c6/hadoop.tmp.dir so I do NOT create it in target/test-data/6f39b251-506e-547e-6571-b32ebac4f970 2023-07-24 23:10:48,458 INFO [Listener at localhost/39785] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf, deleteOnExit=true 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/test.cache.data in system properties and HBase conf 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir in system properties and HBase conf 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 23:10:48,459 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 23:10:48,459 DEBUG [Listener at localhost/39785] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:48,460 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 23:10:48,461 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/nfs.dump.dir in system properties and HBase conf 2023-07-24 23:10:48,461 INFO [Listener at localhost/39785] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir in system properties and HBase conf 2023-07-24 23:10:48,461 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:48,461 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 23:10:48,461 INFO [Listener at localhost/39785] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 23:10:48,466 WARN [Listener at localhost/39785] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:48,466 WARN [Listener at localhost/39785] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:48,497 DEBUG [Listener at localhost/39785-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019999755d000a, quorum=127.0.0.1:59310, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 23:10:48,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019999755d000a, quorum=127.0.0.1:59310, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 23:10:48,522 WARN [Listener at localhost/39785] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:48,525 INFO [Listener at localhost/39785] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:48,529 INFO [Listener at localhost/39785] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/Jetty_localhost_35153_hdfs____.4j67hl/webapp 2023-07-24 23:10:48,624 INFO [Listener at localhost/39785] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35153 2023-07-24 23:10:48,628 WARN [Listener at localhost/39785] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:48,629 WARN [Listener at localhost/39785] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:48,678 WARN [Listener at localhost/36591] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:48,693 WARN [Listener at localhost/36591] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:48,695 WARN [Listener 
at localhost/36591] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:48,696 INFO [Listener at localhost/36591] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:48,703 INFO [Listener at localhost/36591] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/Jetty_localhost_46365_datanode____xvnm8n/webapp 2023-07-24 23:10:48,799 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:48,801 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 23:10:48,801 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 23:10:48,803 INFO [Listener at localhost/36591] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46365 2023-07-24 23:10:48,815 WARN [Listener at localhost/35961] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:48,843 WARN [Listener at localhost/35961] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:48,846 WARN [Listener at localhost/35961] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:48,847 INFO [Listener at localhost/35961] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:48,853 INFO [Listener at localhost/35961] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/Jetty_localhost_42587_datanode____k24qe5/webapp 2023-07-24 23:10:48,940 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f2727d95375cbc8: Processing first storage report for DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7 from datanode 8e9a8d57-1f71-4441-963b-365889bcbd68 2023-07-24 23:10:48,940 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f2727d95375cbc8: from storage DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7 node DatanodeRegistration(127.0.0.1:43569, datanodeUuid=8e9a8d57-1f71-4441-963b-365889bcbd68, infoPort=38793, infoSecurePort=0, ipcPort=35961, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:48,940 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f2727d95375cbc8: Processing first storage report for DS-f1fc145b-7d8c-4d3e-a78e-211ca7eca599 from datanode 8e9a8d57-1f71-4441-963b-365889bcbd68 2023-07-24 23:10:48,940 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f2727d95375cbc8: from storage DS-f1fc145b-7d8c-4d3e-a78e-211ca7eca599 node DatanodeRegistration(127.0.0.1:43569, datanodeUuid=8e9a8d57-1f71-4441-963b-365889bcbd68, infoPort=38793, infoSecurePort=0, ipcPort=35961, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:48,961 INFO [Listener at localhost/35961] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42587 2023-07-24 23:10:48,968 WARN [Listener at localhost/46765] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:48,991 WARN [Listener at localhost/46765] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:48,994 WARN [Listener at localhost/46765] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:48,995 INFO [Listener at localhost/46765] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:49,002 INFO [Listener at localhost/46765] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/Jetty_localhost_41413_datanode____ah4u8w/webapp 2023-07-24 23:10:49,099 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x886b32b8cb29c29b: Processing first storage report for DS-321210a8-f68a-41e1-91e5-a565240f06a8 from datanode a44d6937-70b7-4f83-aa4e-bf08859393c6 2023-07-24 23:10:49,099 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x886b32b8cb29c29b: from storage DS-321210a8-f68a-41e1-91e5-a565240f06a8 node DatanodeRegistration(127.0.0.1:37755, datanodeUuid=a44d6937-70b7-4f83-aa4e-bf08859393c6, infoPort=33229, infoSecurePort=0, ipcPort=46765, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:49,099 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x886b32b8cb29c29b: Processing first storage report for DS-c3bdf4aa-3642-43c2-bd4a-6aef9d33ef0e from datanode a44d6937-70b7-4f83-aa4e-bf08859393c6 2023-07-24 23:10:49,100 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x886b32b8cb29c29b: from storage DS-c3bdf4aa-3642-43c2-bd4a-6aef9d33ef0e node DatanodeRegistration(127.0.0.1:37755, datanodeUuid=a44d6937-70b7-4f83-aa4e-bf08859393c6, infoPort=33229, infoSecurePort=0, ipcPort=46765, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:49,126 INFO [Listener at localhost/46765] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41413 2023-07-24 23:10:49,134 WARN [Listener at localhost/36721] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:49,244 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76628fdec1521fbb: Processing first storage report for 
DS-d7b00978-f4fd-44e2-925f-b53f24466aae from datanode 6cd63ec3-8275-4db2-98e7-52b473c1d976 2023-07-24 23:10:49,244 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76628fdec1521fbb: from storage DS-d7b00978-f4fd-44e2-925f-b53f24466aae node DatanodeRegistration(127.0.0.1:35397, datanodeUuid=6cd63ec3-8275-4db2-98e7-52b473c1d976, infoPort=44663, infoSecurePort=0, ipcPort=36721, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:49,244 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76628fdec1521fbb: Processing first storage report for DS-9a48f187-7da1-4769-9fcf-4a3c15566db0 from datanode 6cd63ec3-8275-4db2-98e7-52b473c1d976 2023-07-24 23:10:49,244 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76628fdec1521fbb: from storage DS-9a48f187-7da1-4769-9fcf-4a3c15566db0 node DatanodeRegistration(127.0.0.1:35397, datanodeUuid=6cd63ec3-8275-4db2-98e7-52b473c1d976, infoPort=44663, infoSecurePort=0, ipcPort=36721, storageInfo=lv=-57;cid=testClusterID;nsid=611002712;c=1690240248469), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:49,246 DEBUG [Listener at localhost/36721] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970 2023-07-24 23:10:49,253 INFO [Listener at localhost/36721] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/zookeeper_0, clientPort=56120, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 23:10:49,255 INFO [Listener at localhost/36721] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56120 2023-07-24 23:10:49,256 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,257 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,285 INFO [Listener at localhost/36721] util.FSUtils(471): Created version file at hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1 with version=8 2023-07-24 23:10:49,285 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/hbase-staging 2023-07-24 23:10:49,287 DEBUG [Listener at 
localhost/36721] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 23:10:49,287 DEBUG [Listener at localhost/36721] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 23:10:49,287 DEBUG [Listener at localhost/36721] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 23:10:49,287 DEBUG [Listener at localhost/36721] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 23:10:49,288 INFO [Listener at localhost/36721] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:49,289 INFO [Listener at localhost/36721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:49,291 INFO [Listener at localhost/36721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35803 2023-07-24 23:10:49,291 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,293 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,294 INFO [Listener at localhost/36721] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35803 connecting to ZooKeeper ensemble=127.0.0.1:56120 2023-07-24 23:10:49,304 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:358030x0, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:49,305 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35803-0x1019999f0a60000 connected 2023-07-24 23:10:49,322 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:49,323 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/running 2023-07-24 23:10:49,323 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:49,324 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35803 2023-07-24 23:10:49,324 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35803 2023-07-24 23:10:49,324 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35803 2023-07-24 23:10:49,325 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35803 2023-07-24 23:10:49,325 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35803 2023-07-24 23:10:49,327 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:49,327 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:49,327 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:49,328 INFO [Listener at localhost/36721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 23:10:49,328 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:49,328 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:49,328 INFO [Listener at localhost/36721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
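[Editor's note] The preceding entries show the test harness tearing down the first minicluster ("Minicluster is down") and starting a fresh one with the same StartMiniClusterOption values (1 master, 3 region servers, 3 datanodes), followed by the new master's ZooKeeper registration and RPC/HTTP setup. A minimal sketch of that shutdown/restart cycle, assuming the test's HBaseTestingUtility instance and only the option values actually printed in the log:

```java
// Illustrative sketch of the restart cycle recorded above; not the test's own code.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testUtil = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)          // matches numMasters=1 in the log
        .numRegionServers(3)    // matches numRegionServers=3
        .numDataNodes(3)        // matches numDataNodes=3
        .build();
    testUtil.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region servers
    // ... run test assertions against the running cluster ...
    testUtil.shutdownMiniCluster();      // produces the "Minicluster is down" entry
  }
}
```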
2023-07-24 23:10:49,329 INFO [Listener at localhost/36721] http.HttpServer(1146): Jetty bound to port 33275 2023-07-24 23:10:49,329 INFO [Listener at localhost/36721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:49,330 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,331 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f8e7458{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:49,331 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,331 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ed4524b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:49,453 INFO [Listener at localhost/36721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:49,454 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:49,454 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:49,454 INFO [Listener at localhost/36721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:49,455 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,457 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3e0cdcc7{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/jetty-0_0_0_0-33275-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4024993610917341486/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:49,459 INFO [Listener at localhost/36721] server.AbstractConnector(333): Started ServerConnector@285e971d{HTTP/1.1, (http/1.1)}{0.0.0.0:33275} 2023-07-24 23:10:49,459 INFO [Listener at localhost/36721] server.Server(415): Started @36762ms 2023-07-24 23:10:49,459 INFO [Listener at localhost/36721] master.HMaster(444): hbase.rootdir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1, hbase.cluster.distributed=false 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,479 INFO 
[Listener at localhost/36721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:49,479 INFO [Listener at localhost/36721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:49,480 INFO [Listener at localhost/36721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45635 2023-07-24 23:10:49,481 INFO [Listener at localhost/36721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:49,482 DEBUG [Listener at localhost/36721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:49,482 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,484 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,485 INFO [Listener at localhost/36721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45635 connecting to ZooKeeper ensemble=127.0.0.1:56120 2023-07-24 23:10:49,489 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:456350x0, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:49,490 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45635-0x1019999f0a60001 connected 2023-07-24 23:10:49,490 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:49,491 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:49,491 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:49,492 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45635 2023-07-24 23:10:49,492 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45635 2023-07-24 23:10:49,492 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45635 2023-07-24 23:10:49,493 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45635 2023-07-24 23:10:49,493 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45635 2023-07-24 23:10:49,495 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:49,495 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:49,495 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:49,496 INFO [Listener at localhost/36721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:49,496 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:49,496 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:49,496 INFO [Listener at localhost/36721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:49,497 INFO [Listener at localhost/36721] http.HttpServer(1146): Jetty bound to port 40983 2023-07-24 23:10:49,498 INFO [Listener at localhost/36721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:49,503 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,503 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a10df09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:49,504 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,504 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f6f98fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:49,641 INFO [Listener at localhost/36721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:49,641 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:49,642 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:49,642 INFO [Listener at localhost/36721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:49,643 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,644 INFO 
[Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2b5fd2c9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/jetty-0_0_0_0-40983-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7400807728000375258/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:49,645 INFO [Listener at localhost/36721] server.AbstractConnector(333): Started ServerConnector@82a3fa8{HTTP/1.1, (http/1.1)}{0.0.0.0:40983} 2023-07-24 23:10:49,646 INFO [Listener at localhost/36721] server.Server(415): Started @36949ms 2023-07-24 23:10:49,660 INFO [Listener at localhost/36721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:49,660 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,660 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,660 INFO [Listener at localhost/36721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:49,660 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,661 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:49,661 INFO [Listener at localhost/36721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:49,661 INFO [Listener at localhost/36721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44481 2023-07-24 23:10:49,662 INFO [Listener at localhost/36721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:49,666 DEBUG [Listener at localhost/36721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:49,667 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,668 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,669 INFO [Listener at localhost/36721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44481 connecting to ZooKeeper ensemble=127.0.0.1:56120 2023-07-24 23:10:49,674 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:444810x0, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
23:10:49,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44481-0x1019999f0a60002 connected 2023-07-24 23:10:49,676 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:49,676 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:49,676 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:49,677 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44481 2023-07-24 23:10:49,677 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44481 2023-07-24 23:10:49,677 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44481 2023-07-24 23:10:49,678 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44481 2023-07-24 23:10:49,678 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44481 2023-07-24 23:10:49,680 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:49,680 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:49,680 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:49,681 INFO [Listener at localhost/36721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:49,681 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:49,681 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:49,681 INFO [Listener at localhost/36721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
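[Editor's note] The repeated "Set watcher on znode that does not yet exist" entries above correspond to existence watches placed on znodes such as /hbase/master, /hbase/running and /hbase/acl, so each server is notified when the active master later creates them (the NodeCreated events seen further down). The sketch below illustrates that pattern with the vanilla ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil classes, using the quorum address printed in the log purely as an example.

```java
// Illustrative only: register an existence watch on a znode that may not exist yet.
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // Fired on connection state changes and on NodeCreated/NodeDeleted for
        // watched paths, mirroring the ZKWatcher events in the log.
        System.out.println("event: " + event.getType() + " on " + event.getPath());
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56120", 30000, watcher);
    // exists() returns null when the znode is absent but still registers the watch,
    // so a later creation of /hbase/master triggers a NodeCreated callback.
    System.out.println("stat: " + zk.exists("/hbase/master", true));
    zk.close();
  }
}
```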
2023-07-24 23:10:49,682 INFO [Listener at localhost/36721] http.HttpServer(1146): Jetty bound to port 36349 2023-07-24 23:10:49,682 INFO [Listener at localhost/36721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:49,685 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,685 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7843c978{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:49,685 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,685 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40e93b12{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:49,819 INFO [Listener at localhost/36721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:49,821 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:49,821 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:49,821 INFO [Listener at localhost/36721] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 23:10:49,824 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,825 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5b3993d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/jetty-0_0_0_0-36349-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8529975857021416222/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:49,827 INFO [Listener at localhost/36721] server.AbstractConnector(333): Started ServerConnector@45dcad8a{HTTP/1.1, (http/1.1)}{0.0.0.0:36349} 2023-07-24 23:10:49,827 INFO [Listener at localhost/36721] server.Server(415): Started @37131ms 2023-07-24 23:10:49,841 INFO [Listener at localhost/36721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:49,841 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,841 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,841 INFO [Listener at localhost/36721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:49,841 INFO 
[Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:49,842 INFO [Listener at localhost/36721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:49,842 INFO [Listener at localhost/36721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:49,842 INFO [Listener at localhost/36721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39913 2023-07-24 23:10:49,843 INFO [Listener at localhost/36721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:49,845 DEBUG [Listener at localhost/36721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:49,845 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,847 INFO [Listener at localhost/36721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:49,848 INFO [Listener at localhost/36721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39913 connecting to ZooKeeper ensemble=127.0.0.1:56120 2023-07-24 23:10:49,851 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:399130x0, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:49,853 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:399130x0, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:49,853 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39913-0x1019999f0a60003 connected 2023-07-24 23:10:49,853 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:49,854 DEBUG [Listener at localhost/36721] zookeeper.ZKUtil(164): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:49,854 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39913 2023-07-24 23:10:49,856 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39913 2023-07-24 23:10:49,858 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39913 2023-07-24 23:10:49,861 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39913 2023-07-24 23:10:49,862 DEBUG [Listener at localhost/36721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=39913 2023-07-24 23:10:49,864 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:49,864 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:49,864 INFO [Listener at localhost/36721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:49,865 INFO [Listener at localhost/36721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:49,866 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:49,866 INFO [Listener at localhost/36721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:49,866 INFO [Listener at localhost/36721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:49,867 INFO [Listener at localhost/36721] http.HttpServer(1146): Jetty bound to port 36829 2023-07-24 23:10:49,867 INFO [Listener at localhost/36721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:49,870 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,870 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@234a8caa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:49,870 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:49,871 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5fd1460f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:49,999 INFO [Listener at localhost/36721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:50,000 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:50,000 INFO [Listener at localhost/36721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:50,000 INFO [Listener at localhost/36721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:50,003 INFO [Listener at localhost/36721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:50,004 INFO [Listener at localhost/36721] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5e2b38d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/java.io.tmpdir/jetty-0_0_0_0-36829-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5583861937798527636/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:50,005 INFO [Listener at localhost/36721] server.AbstractConnector(333): Started ServerConnector@50dbd304{HTTP/1.1, (http/1.1)}{0.0.0.0:36829} 2023-07-24 23:10:50,005 INFO [Listener at localhost/36721] server.Server(415): Started @37309ms 2023-07-24 23:10:50,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:50,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4f1473b7{HTTP/1.1, (http/1.1)}{0.0.0.0:36127} 2023-07-24 23:10:50,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37319ms 2023-07-24 23:10:50,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,016 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:50,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,018 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:50,018 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:50,019 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:50,018 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:50,020 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:50,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35803,1690240249288 from backup master directory 2023-07-24 23:10:50,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:50,025 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,025 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:50,025 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:50,025 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/hbase.id with ID: ebfcd1b9-27e5-44a3-b24b-1d1f09f6ecda 2023-07-24 23:10:50,077 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:50,080 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0de4f153 to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:50,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c6d7b9b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:50,097 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:50,097 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 23:10:50,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:50,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store-tmp 2023-07-24 23:10:50,112 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:50,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:50,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 23:10:50,115 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:50,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/WALs/jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35803%2C1690240249288, suffix=, logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/WALs/jenkins-hbase4.apache.org,35803,1690240249288, archiveDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/oldWALs, maxLogs=10 2023-07-24 23:10:50,150 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK] 2023-07-24 23:10:50,161 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK] 2023-07-24 23:10:50,162 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK] 2023-07-24 23:10:50,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/WALs/jenkins-hbase4.apache.org,35803,1690240249288/jenkins-hbase4.apache.org%2C35803%2C1690240249288.1690240250124 2023-07-24 23:10:50,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK], DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK], DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK]] 2023-07-24 23:10:50,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:50,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,175 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,178 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 23:10:50,179 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 23:10:50,180 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:50,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:50,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11617408480, jitterRate=0.08195547759532928}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:50,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:50,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 23:10:50,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 23:10:50,201 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 23:10:50,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 23:10:50,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 23:10:50,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 23:10:50,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 23:10:50,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 23:10:50,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 23:10:50,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 23:10:50,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 23:10:50,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 23:10:50,207 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 23:10:50,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 23:10:50,209 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 23:10:50,211 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:50,211 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:50,211 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 23:10:50,211 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:50,211 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,212 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35803,1690240249288, sessionid=0x1019999f0a60000, setting cluster-up flag (Was=false) 2023-07-24 23:10:50,217 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 23:10:50,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,226 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 23:10:50,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:50,232 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.hbase-snapshot/.tmp 2023-07-24 23:10:50,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 23:10:50,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 23:10:50,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 23:10:50,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 23:10:50,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-24 23:10:50,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:50,240 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:50,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:50,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 23:10:50,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:50,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-24 23:10:50,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:50,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:50,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690240280253 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 23:10:50,254 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,255 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:50,255 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 23:10:50,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 23:10:50,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 23:10:50,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 23:10:50,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 23:10:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 23:10:50,257 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:50,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240250257,5,FailOnTimeoutGroup] 2023-07-24 23:10:50,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240250257,5,FailOnTimeoutGroup] 2023-07-24 23:10:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 23:10:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,280 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:50,281 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:50,281 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1 2023-07-24 23:10:50,296 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,297 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:50,298 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/info 2023-07-24 23:10:50,299 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:50,300 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,300 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:50,301 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:50,302 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:50,302 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,302 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:50,304 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/table 2023-07-24 23:10:50,304 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:50,305 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,306 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740 2023-07-24 23:10:50,306 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740 2023-07-24 23:10:50,309 INFO [RS:0;jenkins-hbase4:45635] 
regionserver.HRegionServer(951): ClusterId : ebfcd1b9-27e5-44a3-b24b-1d1f09f6ecda 2023-07-24 23:10:50,312 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:50,313 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(951): ClusterId : ebfcd1b9-27e5-44a3-b24b-1d1f09f6ecda 2023-07-24 23:10:50,312 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(951): ClusterId : ebfcd1b9-27e5-44a3-b24b-1d1f09f6ecda 2023-07-24 23:10:50,313 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:50,313 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:50,314 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 23:10:50,316 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:50,318 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:50,319 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9886104000, jitterRate=-0.07928481698036194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:50,319 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:50,319 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:50,319 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:50,319 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:50,319 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:50,319 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:50,320 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:50,320 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:50,321 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:50,321 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:50,321 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:50,321 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:50,322 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:50,323 DEBUG [RS:2;jenkins-hbase4:39913] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:50,323 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:50,329 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:50,329 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:50,329 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ReadOnlyZKClient(139): Connect 0x00baf12b to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:50,329 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ReadOnlyZKClient(139): Connect 0x7a4dd8c4 to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:50,329 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ReadOnlyZKClient(139): Connect 0x6070f617 to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:50,336 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:50,337 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 23:10:50,338 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 23:10:50,340 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 23:10:50,342 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 23:10:50,343 DEBUG [RS:0;jenkins-hbase4:45635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@748a39fd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:50,343 DEBUG [RS:1;jenkins-hbase4:44481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6eb71469, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:50,344 DEBUG [RS:0;jenkins-hbase4:45635] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35b733c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:50,344 DEBUG [RS:1;jenkins-hbase4:44481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f8a1e52, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:50,345 DEBUG [RS:2;jenkins-hbase4:39913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e7d1f4c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:50,345 DEBUG [RS:2;jenkins-hbase4:39913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a429a5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:50,356 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39913 2023-07-24 23:10:50,356 INFO [RS:2;jenkins-hbase4:39913] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:50,356 INFO [RS:2;jenkins-hbase4:39913] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:50,356 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:50,356 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35803,1690240249288 with isa=jenkins-hbase4.apache.org/172.31.14.131:39913, startcode=1690240249840 2023-07-24 23:10:50,356 DEBUG [RS:2;jenkins-hbase4:39913] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:50,358 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44481 2023-07-24 23:10:50,358 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45635 2023-07-24 23:10:50,358 INFO [RS:1;jenkins-hbase4:44481] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:50,358 INFO [RS:1;jenkins-hbase4:44481] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:50,358 INFO [RS:0;jenkins-hbase4:45635] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:50,358 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56179, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:50,358 INFO [RS:0;jenkins-hbase4:45635] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:50,359 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:50,358 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 23:10:50,359 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35803,1690240249288 with isa=jenkins-hbase4.apache.org/172.31.14.131:45635, startcode=1690240249478 2023-07-24 23:10:50,359 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35803,1690240249288 with isa=jenkins-hbase4.apache.org/172.31.14.131:44481, startcode=1690240249660 2023-07-24 23:10:50,360 DEBUG [RS:0;jenkins-hbase4:45635] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:50,360 DEBUG [RS:1;jenkins-hbase4:44481] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:50,360 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35803] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,360 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:50,361 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 23:10:50,361 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1 2023-07-24 23:10:50,361 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36591 2023-07-24 23:10:50,361 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33275 2023-07-24 23:10:50,362 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46551, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:50,362 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59399, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:50,362 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35803] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,362 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 23:10:50,362 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 23:10:50,362 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35803] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,362 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:50,363 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1 2023-07-24 23:10:50,363 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 23:10:50,363 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:50,363 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36591 2023-07-24 23:10:50,363 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1 2023-07-24 23:10:50,363 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33275 2023-07-24 23:10:50,363 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36591 2023-07-24 23:10:50,363 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ZKUtil(162): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,363 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33275 2023-07-24 23:10:50,364 WARN [RS:2;jenkins-hbase4:39913] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 23:10:50,364 INFO [RS:2;jenkins-hbase4:39913] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:50,364 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,373 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45635,1690240249478] 2023-07-24 23:10:50,373 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39913,1690240249840] 2023-07-24 23:10:50,374 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:50,374 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ZKUtil(162): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,374 WARN [RS:0;jenkins-hbase4:45635] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:50,374 INFO [RS:0;jenkins-hbase4:45635] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:50,374 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ZKUtil(162): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,374 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,374 WARN [RS:1;jenkins-hbase4:44481] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 23:10:50,374 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44481,1690240249660] 2023-07-24 23:10:50,374 INFO [RS:1;jenkins-hbase4:44481] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:50,375 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,375 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ZKUtil(162): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,380 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ZKUtil(162): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,381 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ZKUtil(162): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,383 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:50,383 INFO [RS:2;jenkins-hbase4:39913] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:50,383 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ZKUtil(162): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,383 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ZKUtil(162): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,384 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ZKUtil(162): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,384 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ZKUtil(162): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,384 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ZKUtil(162): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,385 DEBUG [RS:0;jenkins-hbase4:45635] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:50,385 INFO [RS:0;jenkins-hbase4:45635] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:50,385 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ZKUtil(162): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,386 INFO [RS:2;jenkins-hbase4:39913] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 
2023-07-24 23:10:50,386 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:50,386 INFO [RS:1;jenkins-hbase4:44481] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:50,387 INFO [RS:2;jenkins-hbase4:39913] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:50,387 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,389 INFO [RS:0;jenkins-hbase4:45635] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:50,389 INFO [RS:1;jenkins-hbase4:44481] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:50,389 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:50,389 INFO [RS:0;jenkins-hbase4:45635] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:50,390 INFO [RS:1;jenkins-hbase4:44481] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:50,390 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,390 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,390 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:50,391 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:50,391 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,392 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,393 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,394 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,394 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,394 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:50,395 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:50,396 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,395 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:50,396 DEBUG [RS:2;jenkins-hbase4:39913] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:0;jenkins-hbase4:45635] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,396 DEBUG [RS:1;jenkins-hbase4:44481] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:50,404 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,404 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,404 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,404 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,409 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,409 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,409 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,410 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,410 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,410 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,410 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,410 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,425 INFO [RS:0;jenkins-hbase4:45635] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:50,426 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45635,1690240249478-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,426 INFO [RS:1;jenkins-hbase4:44481] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:50,427 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44481,1690240249660-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,427 INFO [RS:2;jenkins-hbase4:39913] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:50,427 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39913,1690240249840-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,438 INFO [RS:0;jenkins-hbase4:45635] regionserver.Replication(203): jenkins-hbase4.apache.org,45635,1690240249478 started 2023-07-24 23:10:50,438 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45635,1690240249478, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45635, sessionid=0x1019999f0a60001 2023-07-24 23:10:50,439 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:50,439 DEBUG [RS:0;jenkins-hbase4:45635] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,439 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45635,1690240249478' 2023-07-24 23:10:50,439 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:50,439 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:50,439 INFO [RS:1;jenkins-hbase4:44481] regionserver.Replication(203): jenkins-hbase4.apache.org,44481,1690240249660 started 2023-07-24 23:10:50,439 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44481,1690240249660, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44481, sessionid=0x1019999f0a60002 2023-07-24 23:10:50,439 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:50,439 DEBUG [RS:1;jenkins-hbase4:44481] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,440 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44481,1690240249660' 2023-07-24 23:10:50,440 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45635,1690240249478' 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:50,440 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:50,440 DEBUG [RS:0;jenkins-hbase4:45635] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:50,440 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc 
started 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44481,1690240249660' 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:50,441 DEBUG [RS:0;jenkins-hbase4:45635] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:50,441 INFO [RS:0;jenkins-hbase4:45635] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:50,441 DEBUG [RS:1;jenkins-hbase4:44481] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:50,441 INFO [RS:1;jenkins-hbase4:44481] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 23:10:50,442 INFO [RS:2;jenkins-hbase4:39913] regionserver.Replication(203): jenkins-hbase4.apache.org,39913,1690240249840 started 2023-07-24 23:10:50,442 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39913,1690240249840, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39913, sessionid=0x1019999f0a60003 2023-07-24 23:10:50,442 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39913,1690240249840' 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,443 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39913,1690240249840' 2023-07-24 23:10:50,444 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:50,444 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,444 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,444 DEBUG [RS:2;jenkins-hbase4:39913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:50,445 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ZKUtil(398): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 23:10:50,445 DEBUG [RS:2;jenkins-hbase4:39913] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:50,445 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ZKUtil(398): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 23:10:50,445 INFO [RS:0;jenkins-hbase4:45635] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 23:10:50,445 INFO [RS:1;jenkins-hbase4:44481] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 23:10:50,445 INFO [RS:2;jenkins-hbase4:39913] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 23:10:50,446 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ZKUtil(398): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 23:10:50,446 INFO [RS:2;jenkins-hbase4:39913] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 23:10:50,446 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,446 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:50,492 DEBUG [jenkins-hbase4:35803] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 23:10:50,492 DEBUG [jenkins-hbase4:35803] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:50,493 DEBUG [jenkins-hbase4:35803] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:50,493 DEBUG [jenkins-hbase4:35803] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:50,493 DEBUG [jenkins-hbase4:35803] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:50,493 DEBUG [jenkins-hbase4:35803] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:50,494 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39913,1690240249840, state=OPENING 2023-07-24 23:10:50,496 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 23:10:50,497 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:50,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39913,1690240249840}] 2023-07-24 23:10:50,500 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:50,545 WARN [ReadOnlyZKClient-127.0.0.1:56120@0x0de4f153] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 23:10:50,546 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:50,548 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:50,548 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39913] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:52198 deadline: 1690240310548, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,551 INFO [RS:1;jenkins-hbase4:44481] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44481%2C1690240249660, suffix=, logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,44481,1690240249660, archiveDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs, maxLogs=32 2023-07-24 23:10:50,551 INFO [RS:0;jenkins-hbase4:45635] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45635%2C1690240249478, suffix=, logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,45635,1690240249478, 
archiveDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs, maxLogs=32 2023-07-24 23:10:50,552 INFO [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39913%2C1690240249840, suffix=, logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,39913,1690240249840, archiveDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs, maxLogs=32 2023-07-24 23:10:50,578 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK] 2023-07-24 23:10:50,578 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK] 2023-07-24 23:10:50,578 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK] 2023-07-24 23:10:50,592 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK] 2023-07-24 23:10:50,596 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK] 2023-07-24 23:10:50,596 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK] 2023-07-24 23:10:50,601 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK] 2023-07-24 23:10:50,601 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK] 2023-07-24 23:10:50,601 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK] 2023-07-24 23:10:50,603 INFO [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,39913,1690240249840/jenkins-hbase4.apache.org%2C39913%2C1690240249840.1690240250557 2023-07-24 23:10:50,604 INFO 
[RS:1;jenkins-hbase4:44481] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,44481,1690240249660/jenkins-hbase4.apache.org%2C44481%2C1690240249660.1690240250555 2023-07-24 23:10:50,611 DEBUG [RS:1;jenkins-hbase4:44481] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK], DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK], DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK]] 2023-07-24 23:10:50,611 INFO [RS:0;jenkins-hbase4:45635] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,45635,1690240249478/jenkins-hbase4.apache.org%2C45635%2C1690240249478.1690240250557 2023-07-24 23:10:50,611 DEBUG [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK], DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK], DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK]] 2023-07-24 23:10:50,614 DEBUG [RS:0;jenkins-hbase4:45635] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK], DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK], DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK]] 2023-07-24 23:10:50,655 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:50,657 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:50,659 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52214, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:50,664 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 23:10:50,664 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:50,666 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39913%2C1690240249840.meta, suffix=.meta, logDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,39913,1690240249840, archiveDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs, maxLogs=32 2023-07-24 23:10:50,687 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK] 2023-07-24 23:10:50,688 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK] 2023-07-24 23:10:50,688 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK] 2023-07-24 23:10:50,699 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/WALs/jenkins-hbase4.apache.org,39913,1690240249840/jenkins-hbase4.apache.org%2C39913%2C1690240249840.meta.1690240250668.meta 2023-07-24 23:10:50,700 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37755,DS-321210a8-f68a-41e1-91e5-a565240f06a8,DISK], DatanodeInfoWithStorage[127.0.0.1:35397,DS-d7b00978-f4fd-44e2-925f-b53f24466aae,DISK], DatanodeInfoWithStorage[127.0.0.1:43569,DS-cd008b3b-2212-4b6b-bb22-79eb4b6fc3f7,DISK]] 2023-07-24 23:10:50,700 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:50,700 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:50,700 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 23:10:50,701 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 23:10:50,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 23:10:50,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 23:10:50,701 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 23:10:50,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:50,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/info 2023-07-24 23:10:50,705 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/info 2023-07-24 23:10:50,706 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:50,706 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,707 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:50,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:50,708 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:50,708 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:50,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,709 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:50,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/table 2023-07-24 23:10:50,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/table 2023-07-24 23:10:50,710 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:50,711 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:50,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740 2023-07-24 23:10:50,713 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740 2023-07-24 23:10:50,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 23:10:50,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:50,719 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10988533600, jitterRate=0.023386940360069275}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:50,720 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:50,720 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690240250655 2023-07-24 23:10:50,725 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 23:10:50,725 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 23:10:50,726 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39913,1690240249840, state=OPEN 2023-07-24 23:10:50,728 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:50,728 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:50,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 23:10:50,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39913,1690240249840 in 231 msec 2023-07-24 23:10:50,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 23:10:50,732 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 392 msec 2023-07-24 23:10:50,734 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 498 msec 2023-07-24 23:10:50,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690240250734, completionTime=-1 2023-07-24 23:10:50,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 23:10:50,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 23:10:50,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 23:10:50,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690240310738 2023-07-24 23:10:50,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690240370739 2023-07-24 23:10:50,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-24 23:10:50,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35803,1690240249288-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35803,1690240249288-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35803,1690240249288-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35803, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 23:10:50,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:50,746 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 23:10:50,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 23:10:50,748 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:50,748 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:50,750 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:50,750 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015 empty. 2023-07-24 23:10:50,751 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:50,751 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 23:10:50,771 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:50,773 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0507a7bbe7478baa73a6f4cefc1b3015, NAME => 'hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp 2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0507a7bbe7478baa73a6f4cefc1b3015, disabling compactions & flushes 2023-07-24 23:10:50,784 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 
2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. after waiting 0 ms 2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:50,784 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:50,784 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0507a7bbe7478baa73a6f4cefc1b3015: 2023-07-24 23:10:50,786 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:50,787 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240250787"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240250787"}]},"ts":"1690240250787"} 2023-07-24 23:10:50,790 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:50,791 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:50,791 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240250791"}]},"ts":"1690240250791"} 2023-07-24 23:10:50,792 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 23:10:50,797 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:50,797 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:50,797 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:50,797 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:50,797 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:50,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0507a7bbe7478baa73a6f4cefc1b3015, ASSIGN}] 2023-07-24 23:10:50,800 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0507a7bbe7478baa73a6f4cefc1b3015, ASSIGN 2023-07-24 23:10:50,801 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0507a7bbe7478baa73a6f4cefc1b3015, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44481,1690240249660; forceNewPlan=false, retain=false 2023-07-24 23:10:50,853 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:50,855 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 23:10:50,857 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:50,858 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:50,859 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:50,860 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14 empty. 
2023-07-24 23:10:50,860 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:50,860 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 23:10:50,882 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:50,883 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d0bb6bb01fe8e0a6afcf674e8c4a5e14, NAME => 'hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp 2023-07-24 23:10:50,905 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:50,905 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d0bb6bb01fe8e0a6afcf674e8c4a5e14, disabling compactions & flushes 2023-07-24 23:10:50,905 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:50,905 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:50,905 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. after waiting 0 ms 2023-07-24 23:10:50,906 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:50,906 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 
2023-07-24 23:10:50,906 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d0bb6bb01fe8e0a6afcf674e8c4a5e14: 2023-07-24 23:10:50,908 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:50,909 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240250909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240250909"}]},"ts":"1690240250909"} 2023-07-24 23:10:50,911 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:50,911 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:50,912 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240250911"}]},"ts":"1690240250911"} 2023-07-24 23:10:50,913 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 23:10:50,916 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:50,916 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:50,916 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:50,916 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:50,916 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:50,916 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d0bb6bb01fe8e0a6afcf674e8c4a5e14, ASSIGN}] 2023-07-24 23:10:50,919 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d0bb6bb01fe8e0a6afcf674e8c4a5e14, ASSIGN 2023-07-24 23:10:50,920 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d0bb6bb01fe8e0a6afcf674e8c4a5e14, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44481,1690240249660; forceNewPlan=false, retain=false 2023-07-24 23:10:50,920 INFO [jenkins-hbase4:35803] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 23:10:50,922 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0507a7bbe7478baa73a6f4cefc1b3015, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,922 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240250922"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240250922"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240250922"}]},"ts":"1690240250922"} 2023-07-24 23:10:50,922 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d0bb6bb01fe8e0a6afcf674e8c4a5e14, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:50,922 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240250922"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240250922"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240250922"}]},"ts":"1690240250922"} 2023-07-24 23:10:50,923 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 0507a7bbe7478baa73a6f4cefc1b3015, server=jenkins-hbase4.apache.org,44481,1690240249660}] 2023-07-24 23:10:50,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure d0bb6bb01fe8e0a6afcf674e8c4a5e14, server=jenkins-hbase4.apache.org,44481,1690240249660}] 2023-07-24 23:10:51,075 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:51,075 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:51,077 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50264, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:51,081 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 
2023-07-24 23:10:51,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0507a7bbe7478baa73a6f4cefc1b3015, NAME => 'hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:51,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,082 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,082 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,082 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,083 INFO [StoreOpener-0507a7bbe7478baa73a6f4cefc1b3015-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,084 DEBUG [StoreOpener-0507a7bbe7478baa73a6f4cefc1b3015-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/info 2023-07-24 23:10:51,084 DEBUG [StoreOpener-0507a7bbe7478baa73a6f4cefc1b3015-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/info 2023-07-24 23:10:51,085 INFO [StoreOpener-0507a7bbe7478baa73a6f4cefc1b3015-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0507a7bbe7478baa73a6f4cefc1b3015 columnFamilyName info 2023-07-24 23:10:51,085 INFO [StoreOpener-0507a7bbe7478baa73a6f4cefc1b3015-1] regionserver.HStore(310): Store=0507a7bbe7478baa73a6f4cefc1b3015/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:51,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:51,092 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:51,092 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0507a7bbe7478baa73a6f4cefc1b3015; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10775376480, jitterRate=0.003535136580467224}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:51,092 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0507a7bbe7478baa73a6f4cefc1b3015: 2023-07-24 23:10:51,093 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015., pid=8, masterSystemTime=1690240251075 2023-07-24 23:10:51,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:51,097 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:51,097 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 
2023-07-24 23:10:51,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d0bb6bb01fe8e0a6afcf674e8c4a5e14, NAME => 'hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:51,098 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0507a7bbe7478baa73a6f4cefc1b3015, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:51,098 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240251097"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240251097"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240251097"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240251097"}]},"ts":"1690240251097"} 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. service=MultiRowMutationService 2023-07-24 23:10:51,098 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,099 INFO [StoreOpener-d0bb6bb01fe8e0a6afcf674e8c4a5e14-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,100 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 23:10:51,101 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 0507a7bbe7478baa73a6f4cefc1b3015, server=jenkins-hbase4.apache.org,44481,1690240249660 in 176 msec 2023-07-24 23:10:51,101 DEBUG [StoreOpener-d0bb6bb01fe8e0a6afcf674e8c4a5e14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/m 2023-07-24 23:10:51,101 DEBUG [StoreOpener-d0bb6bb01fe8e0a6afcf674e8c4a5e14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/m 2023-07-24 23:10:51,101 INFO [StoreOpener-d0bb6bb01fe8e0a6afcf674e8c4a5e14-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d0bb6bb01fe8e0a6afcf674e8c4a5e14 columnFamilyName m 2023-07-24 23:10:51,102 INFO [StoreOpener-d0bb6bb01fe8e0a6afcf674e8c4a5e14-1] regionserver.HStore(310): Store=d0bb6bb01fe8e0a6afcf674e8c4a5e14/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:51,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 23:10:51,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0507a7bbe7478baa73a6f4cefc1b3015, ASSIGN in 303 msec 2023-07-24 23:10:51,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,103 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:51,103 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251103"}]},"ts":"1690240251103"} 2023-07-24 23:10:51,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,105 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 23:10:51,108 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:51,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:51,109 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure 
table=hbase:namespace in 363 msec 2023-07-24 23:10:51,110 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:51,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d0bb6bb01fe8e0a6afcf674e8c4a5e14; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5983e7cd, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:51,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d0bb6bb01fe8e0a6afcf674e8c4a5e14: 2023-07-24 23:10:51,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14., pid=9, masterSystemTime=1690240251075 2023-07-24 23:10:51,113 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:51,113 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:51,113 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d0bb6bb01fe8e0a6afcf674e8c4a5e14, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:51,113 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240251113"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240251113"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240251113"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240251113"}]},"ts":"1690240251113"} 2023-07-24 23:10:51,116 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 23:10:51,116 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure d0bb6bb01fe8e0a6afcf674e8c4a5e14, server=jenkins-hbase4.apache.org,44481,1690240249660 in 191 msec 2023-07-24 23:10:51,118 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 23:10:51,118 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d0bb6bb01fe8e0a6afcf674e8c4a5e14, ASSIGN in 200 msec 2023-07-24 23:10:51,118 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:51,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251118"}]},"ts":"1690240251118"} 2023-07-24 23:10:51,121 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 
23:10:51,124 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:51,125 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 271 msec 2023-07-24 23:10:51,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 23:10:51,149 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:51,150 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:51,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:51,155 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50280, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:51,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 23:10:51,168 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:51,171 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-24 23:10:51,173 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 23:10:51,173 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 23:10:51,179 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:51,179 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:51,181 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:51,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 23:10:51,183 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35803,1690240249288] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 23:10:51,190 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:51,194 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-24 23:10:51,206 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 23:10:51,210 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 23:10:51,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.185sec 2023-07-24 23:10:51,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-24 23:10:51,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:51,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 23:10:51,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 23:10:51,219 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-24 23:10:51,219 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:51,220 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:51,222 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,223 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8 empty. 2023-07-24 23:10:51,223 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,223 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 23:10:51,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 23:10:51,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35803,1690240249288-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 23:10:51,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35803,1690240249288-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 23:10:51,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 23:10:51,243 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:51,245 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => ae8cb358246724b2add803ddae0b9cb8, NAME => 'hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp 2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing ae8cb358246724b2add803ddae0b9cb8, disabling compactions & flushes 2023-07-24 23:10:51,256 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. after waiting 0 ms 2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:51,256 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 
2023-07-24 23:10:51,256 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for ae8cb358246724b2add803ddae0b9cb8: 2023-07-24 23:10:51,258 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:51,259 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690240251259"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240251259"}]},"ts":"1690240251259"} 2023-07-24 23:10:51,261 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:51,261 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:51,261 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251261"}]},"ts":"1690240251261"} 2023-07-24 23:10:51,262 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 23:10:51,265 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:51,266 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:51,266 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:51,266 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:51,266 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:51,266 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=ae8cb358246724b2add803ddae0b9cb8, ASSIGN}] 2023-07-24 23:10:51,267 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=ae8cb358246724b2add803ddae0b9cb8, ASSIGN 2023-07-24 23:10:51,267 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=ae8cb358246724b2add803ddae0b9cb8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39913,1690240249840; forceNewPlan=false, retain=false 2023-07-24 23:10:51,310 DEBUG [Listener at localhost/36721] zookeeper.ReadOnlyZKClient(139): Connect 0x3d72335e to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:51,315 DEBUG [Listener at localhost/36721] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@739ec01b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:51,317 DEBUG 
[hconnection-0x368055c6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:51,318 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52220, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:51,320 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:51,320 INFO [Listener at localhost/36721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:51,322 DEBUG [Listener at localhost/36721] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 23:10:51,324 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39098, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 23:10:51,328 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 23:10:51,328 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:51,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 23:10:51,329 DEBUG [Listener at localhost/36721] zookeeper.ReadOnlyZKClient(139): Connect 0x0fb15e46 to 127.0.0.1:56120 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:51,334 DEBUG [Listener at localhost/36721] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bab1413, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:51,334 INFO [Listener at localhost/36721] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56120 2023-07-24 23:10:51,338 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:51,339 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019999f0a6000a connected 2023-07-24 23:10:51,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-24 23:10:51,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-24 23:10:51,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 23:10:51,356 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): 
master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:51,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-24 23:10:51,418 INFO [jenkins-hbase4:35803] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 23:10:51,419 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ae8cb358246724b2add803ddae0b9cb8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:51,419 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690240251419"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240251419"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240251419"}]},"ts":"1690240251419"} 2023-07-24 23:10:51,421 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure ae8cb358246724b2add803ddae0b9cb8, server=jenkins-hbase4.apache.org,39913,1690240249840}] 2023-07-24 23:10:51,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-24 23:10:51,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:51,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-24 23:10:51,459 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:51,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-24 23:10:51,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:51,462 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:51,462 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:51,464 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:51,466 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,466 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(153): Directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf empty. 2023-07-24 23:10:51,467 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,467 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 23:10:51,496 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:51,499 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => a62c3f29ed4127ec4fbf509a46ada0bf, NAME => 'np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp 2023-07-24 23:10:51,529 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,529 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing a62c3f29ed4127ec4fbf509a46ada0bf, disabling compactions & flushes 2023-07-24 23:10:51,529 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:51,529 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:51,529 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. after waiting 0 ms 2023-07-24 23:10:51,529 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:51,530 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 
2023-07-24 23:10:51,530 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for a62c3f29ed4127ec4fbf509a46ada0bf: 2023-07-24 23:10:51,532 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:51,534 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240251533"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240251533"}]},"ts":"1690240251533"} 2023-07-24 23:10:51,536 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:51,536 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:51,537 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251536"}]},"ts":"1690240251536"} 2023-07-24 23:10:51,538 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-24 23:10:51,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:51,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:51,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:51,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:51,542 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:51,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, ASSIGN}] 2023-07-24 23:10:51,543 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, ASSIGN 2023-07-24 23:10:51,544 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39913,1690240249840; forceNewPlan=false, retain=false 2023-07-24 23:10:51,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:51,577 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 
2023-07-24 23:10:51,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae8cb358246724b2add803ddae0b9cb8, NAME => 'hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:51,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,577 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,578 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,579 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,581 DEBUG [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/q 2023-07-24 23:10:51,581 DEBUG [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/q 2023-07-24 23:10:51,581 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae8cb358246724b2add803ddae0b9cb8 columnFamilyName q 2023-07-24 23:10:51,582 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] regionserver.HStore(310): Store=ae8cb358246724b2add803ddae0b9cb8/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:51,582 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,583 DEBUG 
[StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/u 2023-07-24 23:10:51,583 DEBUG [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/u 2023-07-24 23:10:51,584 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae8cb358246724b2add803ddae0b9cb8 columnFamilyName u 2023-07-24 23:10:51,584 INFO [StoreOpener-ae8cb358246724b2add803ddae0b9cb8-1] regionserver.HStore(310): Store=ae8cb358246724b2add803ddae0b9cb8/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:51,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
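The FlushLargeStoresPolicy line just above says the hbase:quota descriptor carries no hbase.hregion.percolumnfamilyflush.size.lower.bound, so the region falls back to the memstore flush size divided by the number of families (128 MB / 2 families = 64 MB here, the flushSizeLowerBound=67108864 seen when the region finishes opening). A hedged sketch of setting that bound explicitly on a table descriptor; the 16 MB figure is purely illustrative and is not something this test does:

    // (imports as in the first sketch, plus org.apache.hadoop.hbase.client.TableDescriptor
    //  and java.io.IOException)
    // Sketch only: give a table an explicit per-column-family flush lower bound so
    // FlushLargeStoresPolicy can flush one large family instead of the whole region.
    static void setPerFamilyFlushBound(Admin admin, TableName tn) throws IOException {
      TableDescriptor current = admin.getDescriptor(tn);
      admin.modifyTable(TableDescriptorBuilder.newBuilder(current)
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                    String.valueOf(16L * 1024 * 1024))   // illustrative 16 MB
          .build());
    }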
2023-07-24 23:10:51,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:51,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:51,592 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae8cb358246724b2add803ddae0b9cb8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11099432640, jitterRate=0.03371521830558777}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 23:10:51,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae8cb358246724b2add803ddae0b9cb8: 2023-07-24 23:10:51,593 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8., pid=15, masterSystemTime=1690240251573 2023-07-24 23:10:51,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:51,595 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:51,595 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ae8cb358246724b2add803ddae0b9cb8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:51,596 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690240251595"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240251595"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240251595"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240251595"}]},"ts":"1690240251595"} 2023-07-24 23:10:51,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-24 23:10:51,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure ae8cb358246724b2add803ddae0b9cb8, server=jenkins-hbase4.apache.org,39913,1690240249840 in 176 msec 2023-07-24 23:10:51,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 23:10:51,601 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=ae8cb358246724b2add803ddae0b9cb8, ASSIGN in 333 msec 2023-07-24 23:10:51,601 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:51,602 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251602"}]},"ts":"1690240251602"} 2023-07-24 23:10:51,603 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 23:10:51,605 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:51,607 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 394 msec 2023-07-24 23:10:51,694 INFO [jenkins-hbase4:35803] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 23:10:51,696 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a62c3f29ed4127ec4fbf509a46ada0bf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:51,696 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240251696"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240251696"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240251696"}]},"ts":"1690240251696"} 2023-07-24 23:10:51,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure a62c3f29ed4127ec4fbf509a46ada0bf, server=jenkins-hbase4.apache.org,39913,1690240249840}] 2023-07-24 23:10:51,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:51,853 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 
2023-07-24 23:10:51,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a62c3f29ed4127ec4fbf509a46ada0bf, NAME => 'np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:51,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:51,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,855 INFO [StoreOpener-a62c3f29ed4127ec4fbf509a46ada0bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,856 DEBUG [StoreOpener-a62c3f29ed4127ec4fbf509a46ada0bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/fam1 2023-07-24 23:10:51,856 DEBUG [StoreOpener-a62c3f29ed4127ec4fbf509a46ada0bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/fam1 2023-07-24 23:10:51,857 INFO [StoreOpener-a62c3f29ed4127ec4fbf509a46ada0bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a62c3f29ed4127ec4fbf509a46ada0bf columnFamilyName fam1 2023-07-24 23:10:51,857 INFO [StoreOpener-a62c3f29ed4127ec4fbf509a46ada0bf-1] regionserver.HStore(310): Store=a62c3f29ed4127ec4fbf509a46ada0bf/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:51,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:51,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:51,863 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a62c3f29ed4127ec4fbf509a46ada0bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10882216960, jitterRate=0.013485431671142578}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:51,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a62c3f29ed4127ec4fbf509a46ada0bf: 2023-07-24 23:10:51,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf., pid=18, masterSystemTime=1690240251849 2023-07-24 23:10:51,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:51,866 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:51,866 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=a62c3f29ed4127ec4fbf509a46ada0bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:51,866 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240251866"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240251866"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240251866"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240251866"}]},"ts":"1690240251866"} 2023-07-24 23:10:51,869 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 23:10:51,870 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure a62c3f29ed4127ec4fbf509a46ada0bf, server=jenkins-hbase4.apache.org,39913,1690240249840 in 171 msec 2023-07-24 23:10:51,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 23:10:51,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, ASSIGN in 328 msec 2023-07-24 23:10:51,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:51,872 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240251872"}]},"ts":"1690240251872"} 2023-07-24 23:10:51,873 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-24 23:10:51,876 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:51,877 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 421 msec 2023-07-24 23:10:52,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:52,064 INFO [Listener at localhost/36721] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-24 23:10:52,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:52,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-24 23:10:52,068 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:52,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-24 23:10:52,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 23:10:52,088 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-24 23:10:52,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 23:10:52,173 INFO [Listener at localhost/36721] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
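The rollback of pid=19 above is the namespace region quota at work: np1 was created with a limit of 5 regions (the standard key is hbase.namespace.quota.maxregions), np1:table1 already occupies one region, and the requested np1:table2 would take the namespace total to six, so CreateTableProcedure rolls back with QuotaExceededException and the client's CREATE future fails with the same message. A hedged sketch of declaring such a quota and handling the failure; only the property key and the limit of 5 are grounded in the test, the descriptor and error handling are illustrative:

    // (imports as before, plus org.apache.hadoop.hbase.NamespaceDescriptor)
    // Namespace capped at 5 regions; exceeding it makes table creation roll back.
    static void createQuotedNamespace(Admin admin) throws IOException {
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());
    }

    static void createTableOrExplain(Admin admin, TableDescriptor desc) {
      try {
        admin.createTable(desc);   // a pre-split descriptor can exceed the quota in one shot
      } catch (IOException e) {    // QuotaExceededException is an IOException subclass
        // Per the log, this "may be transient": retry after splits settle, shrink the
        // pre-split count, or raise hbase.namespace.quota.maxregions on the namespace.
      }
    }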
2023-07-24 23:10:52,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:52,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:52,175 INFO [Listener at localhost/36721] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-24 23:10:52,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-24 23:10:52,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-24 23:10:52,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 23:10:52,179 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240252179"}]},"ts":"1690240252179"} 2023-07-24 23:10:52,180 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-24 23:10:52,182 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-24 23:10:52,182 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, UNASSIGN}] 2023-07-24 23:10:52,183 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, UNASSIGN 2023-07-24 23:10:52,184 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=a62c3f29ed4127ec4fbf509a46ada0bf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:52,184 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240252184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240252184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240252184"}]},"ts":"1690240252184"} 2023-07-24 23:10:52,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure a62c3f29ed4127ec4fbf509a46ada0bf, server=jenkins-hbase4.apache.org,39913,1690240249840}] 2023-07-24 23:10:52,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 23:10:52,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:52,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a62c3f29ed4127ec4fbf509a46ada0bf, disabling compactions & flushes 2023-07-24 23:10:52,338 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:52,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:52,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. after waiting 0 ms 2023-07-24 23:10:52,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:52,342 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:52,343 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf. 2023-07-24 23:10:52,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a62c3f29ed4127ec4fbf509a46ada0bf: 2023-07-24 23:10:52,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:52,344 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=a62c3f29ed4127ec4fbf509a46ada0bf, regionState=CLOSED 2023-07-24 23:10:52,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240252344"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240252344"}]},"ts":"1690240252344"} 2023-07-24 23:10:52,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 23:10:52,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure a62c3f29ed4127ec4fbf509a46ada0bf, server=jenkins-hbase4.apache.org,39913,1690240249840 in 161 msec 2023-07-24 23:10:52,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 23:10:52,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=a62c3f29ed4127ec4fbf509a46ada0bf, UNASSIGN in 165 msec 2023-07-24 23:10:52,349 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240252349"}]},"ts":"1690240252349"} 2023-07-24 23:10:52,350 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-24 23:10:52,356 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-24 23:10:52,358 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 181 msec 2023-07-24 23:10:52,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 23:10:52,490 INFO [Listener at localhost/36721] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-24 23:10:52,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-24 23:10:52,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,495 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-24 23:10:52,496 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:52,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:52,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 23:10:52,504 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:52,506 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/fam1, FileablePath, hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/recovered.edits] 2023-07-24 23:10:52,513 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/recovered.edits/4.seqid to hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/archive/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf/recovered.edits/4.seqid 2023-07-24 23:10:52,513 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/.tmp/data/np1/table1/a62c3f29ed4127ec4fbf509a46ada0bf 2023-07-24 23:10:52,513 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 23:10:52,516 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,517 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-24 23:10:52,519 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-24 23:10:52,521 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,521 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-24 23:10:52,521 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240252521"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:52,522 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 23:10:52,522 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a62c3f29ed4127ec4fbf509a46ada0bf, NAME => 'np1:table1,,1690240251455.a62c3f29ed4127ec4fbf509a46ada0bf.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 23:10:52,522 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-24 23:10:52,523 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240252523"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:52,524 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-24 23:10:52,530 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 23:10:52,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 39 msec 2023-07-24 23:10:52,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 23:10:52,604 INFO [Listener at localhost/36721] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-24 23:10:52,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-24 23:10:52,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,620 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,622 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,625 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 23:10:52,626 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-24 23:10:52,626 DEBUG [Listener at 
localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:52,626 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,628 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 23:10:52,629 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-24 23:10:52,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35803] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 23:10:52,727 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 23:10:52,727 INFO [Listener at localhost/36721] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d72335e to 127.0.0.1:56120 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] util.JVMClusterUtil(257): Found active master hash=1250994107, stopped=false 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 23:10:52,727 DEBUG [Listener at localhost/36721] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 23:10:52,728 DEBUG [Listener at localhost/36721] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 23:10:52,728 INFO [Listener at localhost/36721] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:52,730 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:52,730 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:52,730 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:52,730 INFO [Listener at localhost/36721] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 23:10:52,730 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:52,730 DEBUG 
[Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:52,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:52,732 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:52,732 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:52,732 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45635,1690240249478' ***** 2023-07-24 23:10:52,732 DEBUG [Listener at localhost/36721] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0de4f153 to 127.0.0.1:56120 2023-07-24 23:10:52,732 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-24 23:10:52,732 DEBUG [Listener at localhost/36721] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,732 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:52,732 INFO [Listener at localhost/36721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44481,1690240249660' ***** 2023-07-24 23:10:52,732 INFO [Listener at localhost/36721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:52,733 INFO [Listener at localhost/36721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39913,1690240249840' ***** 2023-07-24 23:10:52,733 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:52,733 INFO [Listener at localhost/36721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:52,733 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:52,734 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:52,744 INFO [RS:2;jenkins-hbase4:39913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5e2b38d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:52,744 INFO [RS:1;jenkins-hbase4:44481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5b3993d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:52,744 INFO [RS:0;jenkins-hbase4:45635] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@2b5fd2c9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:52,744 INFO [RS:1;jenkins-hbase4:44481] server.AbstractConnector(383): Stopped ServerConnector@45dcad8a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:52,744 INFO [RS:2;jenkins-hbase4:39913] server.AbstractConnector(383): Stopped ServerConnector@50dbd304{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:52,744 INFO [RS:0;jenkins-hbase4:45635] server.AbstractConnector(383): Stopped ServerConnector@82a3fa8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:52,744 INFO [RS:1;jenkins-hbase4:44481] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:52,745 INFO [RS:0;jenkins-hbase4:45635] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:52,744 INFO [RS:2;jenkins-hbase4:39913] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:52,745 INFO [RS:1;jenkins-hbase4:44481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40e93b12{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:52,747 INFO [RS:0;jenkins-hbase4:45635] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f6f98fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:52,747 INFO [RS:1;jenkins-hbase4:44481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7843c978{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:52,747 INFO [RS:2;jenkins-hbase4:39913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5fd1460f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:52,747 INFO [RS:0;jenkins-hbase4:45635] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a10df09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:52,748 INFO [RS:2;jenkins-hbase4:39913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@234a8caa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:52,748 INFO [RS:1;jenkins-hbase4:44481] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:52,748 INFO [RS:0;jenkins-hbase4:45635] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:52,748 INFO [RS:1;jenkins-hbase4:44481] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
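Before the shutdown above begins, the log shows the test cleaning up: DisableTableProcedure pid=20 unassigns np1:table1's region, DeleteTableProcedure pid=23 archives its files and removes it from hbase:meta and from the 'default' rsgroup, and DeleteNamespaceProcedure pid=24 removes the /hbase/namespace/np1 znode and the namespace quota. A minimal hedged sketch of the client-side calls behind those three procedures (names from the log; the Admin handle is assumed as in the earlier sketches):

    static void cleanUpNp1(Admin admin) throws IOException {
      TableName tn = TableName.valueOf("np1", "table1");
      admin.disableTable(tn);        // DisableTableProcedure, pid=20 (region UNASSIGN runs as pids 21-22)
      admin.deleteTable(tn);         // DeleteTableProcedure, pid=23: archive HFiles, clean hbase:meta
      admin.deleteNamespace("np1");  // DeleteNamespaceProcedure, pid=24: drop the znode and namespace quota
    }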
2023-07-24 23:10:52,749 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:52,749 INFO [RS:2;jenkins-hbase4:39913] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:52,749 INFO [RS:0;jenkins-hbase4:45635] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:52,749 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:52,750 INFO [RS:0;jenkins-hbase4:45635] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:52,750 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:52,750 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:52,750 INFO [RS:2;jenkins-hbase4:39913] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:52,749 INFO [RS:1;jenkins-hbase4:44481] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:52,751 INFO [RS:2;jenkins-hbase4:39913] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:52,751 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(3305): Received CLOSE for d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:52,751 DEBUG [RS:0;jenkins-hbase4:45635] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a4dd8c4 to 127.0.0.1:56120 2023-07-24 23:10:52,751 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(3305): Received CLOSE for ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:52,751 DEBUG [RS:0;jenkins-hbase4:45635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,751 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:52,757 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45635,1690240249478; all regions closed. 2023-07-24 23:10:52,757 DEBUG [RS:0;jenkins-hbase4:45635] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 23:10:52,757 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(3305): Received CLOSE for 0507a7bbe7478baa73a6f4cefc1b3015 2023-07-24 23:10:52,757 DEBUG [RS:2;jenkins-hbase4:39913] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00baf12b to 127.0.0.1:56120 2023-07-24 23:10:52,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae8cb358246724b2add803ddae0b9cb8, disabling compactions & flushes 2023-07-24 23:10:52,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d0bb6bb01fe8e0a6afcf674e8c4a5e14, disabling compactions & flushes 2023-07-24 23:10:52,757 DEBUG [RS:2;jenkins-hbase4:39913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,758 INFO [RS:2;jenkins-hbase4:39913] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:52,758 INFO [RS:2;jenkins-hbase4:39913] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:52,758 INFO [RS:2;jenkins-hbase4:39913] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:52,757 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:52,758 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 23:10:52,758 DEBUG [RS:1;jenkins-hbase4:44481] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6070f617 to 127.0.0.1:56120 2023-07-24 23:10:52,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:52,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:52,758 DEBUG [RS:1;jenkins-hbase4:44481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. after waiting 0 ms 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:52,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d0bb6bb01fe8e0a6afcf674e8c4a5e14 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:52,759 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. after waiting 0 ms 2023-07-24 23:10:52,759 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1478): Online Regions={d0bb6bb01fe8e0a6afcf674e8c4a5e14=hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14., 0507a7bbe7478baa73a6f4cefc1b3015=hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015.} 2023-07-24 23:10:52,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 
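During the close sequence above, each region with unflushed data writes its memstore out before closing (hbase:rsgroup flushes 585 B of group metadata, and hbase:meta flushes 5.89 KB across its three families a little further down), so a clean restart does not have to replay the WAL for these regions. The same flush can be requested explicitly from a client before a planned restart; a one-method hedged sketch using the meta table as the example:

    static void flushMeta(Admin admin) throws IOException {
      // Force-flush hbase:meta's memstores to HFiles; the region close path shown in the
      // log performs an equivalent flush automatically before writing the seqid marker.
      admin.flush(TableName.META_TABLE_NAME);
    }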
2023-07-24 23:10:52,759 DEBUG [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1504): Waiting on 0507a7bbe7478baa73a6f4cefc1b3015, d0bb6bb01fe8e0a6afcf674e8c4a5e14 2023-07-24 23:10:52,759 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 23:10:52,759 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, ae8cb358246724b2add803ddae0b9cb8=hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8.} 2023-07-24 23:10:52,759 DEBUG [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1504): Waiting on 1588230740, ae8cb358246724b2add803ddae0b9cb8 2023-07-24 23:10:52,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:52,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:52,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:52,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:52,762 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:52,762 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-24 23:10:52,768 DEBUG [RS:0;jenkins-hbase4:45635] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs 2023-07-24 23:10:52,768 INFO [RS:0;jenkins-hbase4:45635] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45635%2C1690240249478:(num 1690240250557) 2023-07-24 23:10:52,769 DEBUG [RS:0;jenkins-hbase4:45635] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,769 INFO [RS:0;jenkins-hbase4:45635] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/quota/ae8cb358246724b2add803ddae0b9cb8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:52,769 INFO [RS:0;jenkins-hbase4:45635] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:52,770 INFO [RS:0;jenkins-hbase4:45635] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:52,770 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:52,770 INFO [RS:0;jenkins-hbase4:45635] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:52,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:52,770 INFO [RS:0;jenkins-hbase4:45635] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:52,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae8cb358246724b2add803ddae0b9cb8: 2023-07-24 23:10:52,770 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690240251211.ae8cb358246724b2add803ddae0b9cb8. 2023-07-24 23:10:52,771 INFO [RS:0;jenkins-hbase4:45635] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45635 2023-07-24 23:10:52,791 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/info/0ddcac806e614240ae39e91a598e7b0c 2023-07-24 23:10:52,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/.tmp/m/10cee11a35aa408095c5bb6172d2e75a 2023-07-24 23:10:52,800 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ddcac806e614240ae39e91a598e7b0c 2023-07-24 23:10:52,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/.tmp/m/10cee11a35aa408095c5bb6172d2e75a as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/m/10cee11a35aa408095c5bb6172d2e75a 2023-07-24 23:10:52,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/m/10cee11a35aa408095c5bb6172d2e75a, entries=1, sequenceid=7, filesize=4.9 K 2023-07-24 23:10:52,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for d0bb6bb01fe8e0a6afcf674e8c4a5e14 in 51ms, sequenceid=7, compaction requested=false 2023-07-24 23:10:52,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 23:10:52,811 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,817 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,817 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,824 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/rep_barrier/e49e46055fd5425ca02ef5514d9d0d4a 2023-07-24 23:10:52,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/rsgroup/d0bb6bb01fe8e0a6afcf674e8c4a5e14/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:52,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d0bb6bb01fe8e0a6afcf674e8c4a5e14: 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690240250853.d0bb6bb01fe8e0a6afcf674e8c4a5e14. 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0507a7bbe7478baa73a6f4cefc1b3015, disabling compactions & flushes 2023-07-24 23:10:52,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. after waiting 0 ms 2023-07-24 23:10:52,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 
2023-07-24 23:10:52,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0507a7bbe7478baa73a6f4cefc1b3015 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 23:10:52,831 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e49e46055fd5425ca02ef5514d9d0d4a 2023-07-24 23:10:52,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/.tmp/info/4efbf789d3a548078dad2324a7a2e291 2023-07-24 23:10:52,845 WARN [DataStreamer for file /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/table/ad57f18b2ca9475a8e5c53bbfac1fe4c] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-24 23:10:52,845 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/table/ad57f18b2ca9475a8e5c53bbfac1fe4c 2023-07-24 23:10:52,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4efbf789d3a548078dad2324a7a2e291 2023-07-24 23:10:52,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/.tmp/info/4efbf789d3a548078dad2324a7a2e291 as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/info/4efbf789d3a548078dad2324a7a2e291 2023-07-24 23:10:52,851 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad57f18b2ca9475a8e5c53bbfac1fe4c 2023-07-24 23:10:52,852 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/info/0ddcac806e614240ae39e91a598e7b0c as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/info/0ddcac806e614240ae39e91a598e7b0c 2023-07-24 23:10:52,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4efbf789d3a548078dad2324a7a2e291 2023-07-24 23:10:52,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/info/4efbf789d3a548078dad2324a7a2e291, 
entries=3, sequenceid=8, filesize=5.0 K 2023-07-24 23:10:52,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 0507a7bbe7478baa73a6f4cefc1b3015 in 29ms, sequenceid=8, compaction requested=false 2023-07-24 23:10:52,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 23:10:52,859 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ddcac806e614240ae39e91a598e7b0c 2023-07-24 23:10:52,859 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/info/0ddcac806e614240ae39e91a598e7b0c, entries=32, sequenceid=31, filesize=8.5 K 2023-07-24 23:10:52,861 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/rep_barrier/e49e46055fd5425ca02ef5514d9d0d4a as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/rep_barrier/e49e46055fd5425ca02ef5514d9d0d4a 2023-07-24 23:10:52,861 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:52,862 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,862 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:52,862 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/namespace/0507a7bbe7478baa73a6f4cefc1b3015/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-24 23:10:52,862 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45635,1690240249478 2023-07-24 23:10:52,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 
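Each flush above follows the same shape: the memstore is written to an HFile under the region's .tmp directory, committed by renaming it into the column family directory, and a recovered.edits seqid file then records the new max sequence id. A minimal sketch of the write-to-temp-then-rename commit using the Hadoop FileSystem API; the paths and payload are hypothetical stand-ins for the HFiles named in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenRenameSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Hypothetical layout: <region>/.tmp/<family>/<file> is staged, then moved to <region>/<family>/<file>.
    Path tmpFile = new Path("/data/hbase/namespace/region/.tmp/info/hfile-0001");
    Path committed = new Path("/data/hbase/namespace/region/info/hfile-0001");

    // 1. Write the flushed cells somewhere readers never look.
    try (FSDataOutputStream out = fs.create(tmpFile)) {
      out.writeBytes("flushed-cells");
    }

    // 2. Commit with a rename, which is atomic on HDFS, so the file appears in one step.
    fs.mkdirs(committed.getParent());
    if (!fs.rename(tmpFile, committed)) {
      throw new java.io.IOException("commit failed for " + tmpFile);
    }
  }
}

Until the rename succeeds, readers only ever see fully written store files, which is why every flush in the log shows a "Committing ... as ..." line between the flush and the "Added ..." line.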
2023-07-24 23:10:52,863 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,863 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0507a7bbe7478baa73a6f4cefc1b3015: 2023-07-24 23:10:52,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690240250745.0507a7bbe7478baa73a6f4cefc1b3015. 2023-07-24 23:10:52,867 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e49e46055fd5425ca02ef5514d9d0d4a 2023-07-24 23:10:52,867 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/rep_barrier/e49e46055fd5425ca02ef5514d9d0d4a, entries=1, sequenceid=31, filesize=4.9 K 2023-07-24 23:10:52,868 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/.tmp/table/ad57f18b2ca9475a8e5c53bbfac1fe4c as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/table/ad57f18b2ca9475a8e5c53bbfac1fe4c 2023-07-24 23:10:52,873 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad57f18b2ca9475a8e5c53bbfac1fe4c 2023-07-24 23:10:52,873 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/table/ad57f18b2ca9475a8e5c53bbfac1fe4c, entries=8, sequenceid=31, filesize=5.2 K 2023-07-24 23:10:52,874 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 112ms, sequenceid=31, compaction requested=false 2023-07-24 23:10:52,874 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 23:10:52,882 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-24 23:10:52,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:52,883 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:52,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:52,883 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:52,959 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44481,1690240249660; all regions closed. 2023-07-24 23:10:52,959 DEBUG [RS:1;jenkins-hbase4:44481] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 23:10:52,960 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39913,1690240249840; all regions closed. 2023-07-24 23:10:52,960 DEBUG [RS:2;jenkins-hbase4:39913] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 23:10:52,962 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45635,1690240249478] 2023-07-24 23:10:52,963 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45635,1690240249478; numProcessing=1 2023-07-24 23:10:52,971 DEBUG [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs 2023-07-24 23:10:52,971 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45635,1690240249478 already deleted, retry=false 2023-07-24 23:10:52,971 INFO [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39913%2C1690240249840.meta:.meta(num 1690240250668) 2023-07-24 23:10:52,971 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45635,1690240249478 expired; onlineServers=2 2023-07-24 23:10:52,971 DEBUG [RS:1;jenkins-hbase4:44481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs 2023-07-24 23:10:52,971 INFO [RS:1;jenkins-hbase4:44481] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44481%2C1690240249660:(num 1690240250555) 2023-07-24 23:10:52,971 DEBUG [RS:1;jenkins-hbase4:44481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,971 INFO [RS:1;jenkins-hbase4:44481] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,972 INFO [RS:1;jenkins-hbase4:44481] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:52,972 INFO [RS:1;jenkins-hbase4:44481] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:52,972 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:52,972 INFO [RS:1;jenkins-hbase4:44481] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:52,972 INFO [RS:1;jenkins-hbase4:44481] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
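Before stopping its RPC endpoint, each region server above closes leases, retires its WAL into oldWALs, shuts down its chores, and waits in turn for the split and compaction threads to finish. The same drain-then-wait idiom with a plain JDK executor, offered only as an illustration of the pattern rather than of CompactSplit's internals:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DrainAndWaitSketch {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService compactionPool = Executors.newFixedThreadPool(2);
    compactionPool.submit(() -> { /* stand-in for a small compaction */ });

    // Stop accepting new work, then wait for in-flight tasks, mirroring
    // "Waiting for Small Compaction Thread to finish..." in the log.
    compactionPool.shutdown();
    if (!compactionPool.awaitTermination(30, TimeUnit.SECONDS)) {
      compactionPool.shutdownNow();   // interrupt stragglers if the drain takes too long
    }
  }
}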
2023-07-24 23:10:52,974 INFO [RS:1;jenkins-hbase4:44481] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44481 2023-07-24 23:10:52,977 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:52,977 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,977 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660 2023-07-24 23:10:52,978 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44481,1690240249660] 2023-07-24 23:10:52,978 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44481,1690240249660; numProcessing=2 2023-07-24 23:10:52,978 DEBUG [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/oldWALs 2023-07-24 23:10:52,978 INFO [RS:2;jenkins-hbase4:39913] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39913%2C1690240249840:(num 1690240250557) 2023-07-24 23:10:52,978 DEBUG [RS:2;jenkins-hbase4:39913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,978 INFO [RS:2;jenkins-hbase4:39913] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:52,979 INFO [RS:2;jenkins-hbase4:39913] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:52,979 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
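Once a region server's RPC server stops and its ZooKeeper session ends, its ephemeral znode under /hbase/rs disappears; the NodeDeleted events above are what let the master's RegionServerTracker mark the server expired. A small sketch of registering that kind of watch with the plain ZooKeeper client; the quorum address and znode path are copied from the log, the session timeout is an assumption:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch deleted = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56120", 30_000, event -> { });

    String rsZnode = "/hbase/rs/jenkins-hbase4.apache.org,44481,1690240249660";
    Watcher watcher = (WatchedEvent event) -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        deleted.countDown();   // ephemeral node gone: the server is treated as dead
      }
    };
    zk.exists(rsZnode, watcher);   // exists() sets the watch whether or not the node is currently present

    deleted.await();
    zk.close();
  }
}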
2023-07-24 23:10:52,979 INFO [RS:2;jenkins-hbase4:39913] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39913 2023-07-24 23:10:52,982 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44481,1690240249660 already deleted, retry=false 2023-07-24 23:10:52,982 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44481,1690240249660 expired; onlineServers=1 2023-07-24 23:10:52,984 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39913,1690240249840 2023-07-24 23:10:52,984 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:52,985 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39913,1690240249840] 2023-07-24 23:10:52,985 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39913,1690240249840; numProcessing=3 2023-07-24 23:10:52,986 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39913,1690240249840 already deleted, retry=false 2023-07-24 23:10:52,986 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39913,1690240249840 expired; onlineServers=0 2023-07-24 23:10:52,986 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35803,1690240249288' ***** 2023-07-24 23:10:52,986 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 23:10:52,987 DEBUG [M:0;jenkins-hbase4:35803] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4eaf2a31, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:52,987 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:52,988 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:52,989 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:52,989 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:52,989 INFO [M:0;jenkins-hbase4:35803] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@3e0cdcc7{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:52,989 INFO [M:0;jenkins-hbase4:35803] server.AbstractConnector(383): Stopped ServerConnector@285e971d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:52,989 INFO [M:0;jenkins-hbase4:35803] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:52,990 INFO [M:0;jenkins-hbase4:35803] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ed4524b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:52,990 INFO [M:0;jenkins-hbase4:35803] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f8e7458{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:52,990 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35803,1690240249288 2023-07-24 23:10:52,990 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35803,1690240249288; all regions closed. 2023-07-24 23:10:52,990 DEBUG [M:0;jenkins-hbase4:35803] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:52,990 INFO [M:0;jenkins-hbase4:35803] master.HMaster(1491): Stopping master jetty server 2023-07-24 23:10:52,991 INFO [M:0;jenkins-hbase4:35803] server.AbstractConnector(383): Stopped ServerConnector@4f1473b7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:52,991 DEBUG [M:0;jenkins-hbase4:35803] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 23:10:52,991 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 23:10:52,991 DEBUG [M:0;jenkins-hbase4:35803] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 23:10:52,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240250257] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240250257,5,FailOnTimeoutGroup] 2023-07-24 23:10:52,991 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240250257] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240250257,5,FailOnTimeoutGroup] 2023-07-24 23:10:52,991 INFO [M:0;jenkins-hbase4:35803] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 23:10:52,993 INFO [M:0;jenkins-hbase4:35803] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 23:10:52,993 INFO [M:0;jenkins-hbase4:35803] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:52,993 DEBUG [M:0;jenkins-hbase4:35803] master.HMaster(1512): Stopping service threads 2023-07-24 23:10:52,993 INFO [M:0;jenkins-hbase4:35803] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 23:10:52,993 ERROR [M:0;jenkins-hbase4:35803] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 23:10:52,993 INFO [M:0;jenkins-hbase4:35803] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 23:10:52,994 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 23:10:52,994 DEBUG [M:0;jenkins-hbase4:35803] zookeeper.ZKUtil(398): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 23:10:52,994 WARN [M:0;jenkins-hbase4:35803] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 23:10:52,994 INFO [M:0;jenkins-hbase4:35803] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 23:10:52,994 INFO [M:0;jenkins-hbase4:35803] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 23:10:52,994 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:52,994 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:52,994 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:52,995 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:52,995 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 23:10:52,995 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.15 KB 2023-07-24 23:10:53,008 INFO [M:0;jenkins-hbase4:35803] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e631d12a942240caa191164b040cd8cc 2023-07-24 23:10:53,013 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e631d12a942240caa191164b040cd8cc as hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e631d12a942240caa191164b040cd8cc 2023-07-24 23:10:53,018 INFO [M:0;jenkins-hbase4:35803] regionserver.HStore(1080): Added hdfs://localhost:36591/user/jenkins/test-data/9c60f5ee-f4c4-93a5-afc0-015721352de1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e631d12a942240caa191164b040cd8cc, entries=24, sequenceid=194, filesize=12.4 K 2023-07-24 23:10:53,019 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95237, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-24 23:10:53,021 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:53,021 DEBUG [M:0;jenkins-hbase4:35803] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:53,028 INFO [M:0;jenkins-hbase4:35803] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 23:10:53,028 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:53,028 INFO [M:0;jenkins-hbase4:35803] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35803 2023-07-24 23:10:53,031 DEBUG [M:0;jenkins-hbase4:35803] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35803,1690240249288 already deleted, retry=false 2023-07-24 23:10:53,130 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,130 INFO [RS:2;jenkins-hbase4:39913] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39913,1690240249840; zookeeper connection closed. 
2023-07-24 23:10:53,130 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:39913-0x1019999f0a60003, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,131 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64bb75f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64bb75f3 2023-07-24 23:10:53,230 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,230 INFO [RS:1;jenkins-hbase4:44481] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44481,1690240249660; zookeeper connection closed. 2023-07-24 23:10:53,230 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:44481-0x1019999f0a60002, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,232 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@356c5479] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@356c5479 2023-07-24 23:10:53,330 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,330 INFO [RS:0;jenkins-hbase4:45635] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45635,1690240249478; zookeeper connection closed. 2023-07-24 23:10:53,330 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): regionserver:45635-0x1019999f0a60001, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,331 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@346cffec] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@346cffec 2023-07-24 23:10:53,331 INFO [Listener at localhost/36721] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 23:10:53,431 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,431 INFO [M:0;jenkins-hbase4:35803] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35803,1690240249288; zookeeper connection closed. 
2023-07-24 23:10:53,431 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): master:35803-0x1019999f0a60000, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:53,432 WARN [Listener at localhost/36721] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 23:10:53,436 INFO [Listener at localhost/36721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:53,541 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:53,541 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-201857868-172.31.14.131-1690240248469 (Datanode Uuid 6cd63ec3-8275-4db2-98e7-52b473c1d976) service to localhost/127.0.0.1:36591 2023-07-24 23:10:53,542 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data5/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,542 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data6/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,544 WARN [Listener at localhost/36721] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 23:10:53,547 INFO [Listener at localhost/36721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:53,651 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:53,652 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-201857868-172.31.14.131-1690240248469 (Datanode Uuid a44d6937-70b7-4f83-aa4e-bf08859393c6) service to localhost/127.0.0.1:36591 2023-07-24 23:10:53,653 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data3/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,653 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data4/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,654 WARN [Listener at localhost/36721] datanode.DirectoryScanner(534): DirectoryScanner: 
shutdown has been called 2023-07-24 23:10:53,657 INFO [Listener at localhost/36721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:53,760 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 23:10:53,760 WARN [BP-201857868-172.31.14.131-1690240248469 heartbeating to localhost/127.0.0.1:36591] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-201857868-172.31.14.131-1690240248469 (Datanode Uuid 8e9a8d57-1f71-4441-963b-365889bcbd68) service to localhost/127.0.0.1:36591 2023-07-24 23:10:53,761 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data1/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,761 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/cluster_f9d48cc7-c0cb-4472-8ae4-bc4e1459bdbf/dfs/data/data2/current/BP-201857868-172.31.14.131-1690240248469] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 23:10:53,770 INFO [Listener at localhost/36721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 23:10:53,887 INFO [Listener at localhost/36721] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 23:10:53,920 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 23:10:53,920 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 23:10:53,920 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.log.dir so I do NOT create it in target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88 2023-07-24 23:10:53,920 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39b251-506e-547e-6571-b32ebac4f970/hadoop.tmp.dir so I do NOT create it in target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75, deleteOnExit=true 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/test.cache.data in system properties and HBase conf 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir in system properties and HBase conf 2023-07-24 23:10:53,921 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 23:10:53,922 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 23:10:53,922 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 23:10:53,922 DEBUG [Listener at localhost/36721] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-24 23:10:53,922 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:53,922 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 23:10:53,922 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:53,923 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 23:10:53,924 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/nfs.dump.dir in system properties and HBase conf 2023-07-24 23:10:53,924 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir in system properties and HBase conf 2023-07-24 23:10:53,924 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 23:10:53,924 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 23:10:53,924 INFO [Listener at localhost/36721] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 23:10:53,952 WARN [Listener at localhost/36721] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:53,953 WARN [Listener at localhost/36721] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:53,985 DEBUG [Listener at localhost/36721-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019999f0a6000a, quorum=127.0.0.1:56120, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 
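At this point the first minicluster is fully down and the utility brings up a fresh one with the same StartMiniClusterOption (one master, three region servers, three datanodes, one ZK server), reformatting HDFS with clusterid testClusterID. A hedged sketch of driving that cycle from test code with HBaseTestingUtility; the option values mirror the log, everything else is illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Same shape as the StartMiniClusterOption printed in the log.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();

    util.startMiniCluster(option);    // starts DFS, MiniZK and the HBase cluster
    try {
      // ... exercise the cluster here ...
    } finally {
      util.shutdownMiniCluster();     // ends with "Minicluster is down" as above
    }
  }
}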
2023-07-24 23:10:53,985 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019999f0a6000a, quorum=127.0.0.1:56120, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 23:10:54,002 WARN [Listener at localhost/36721] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:54,004 INFO [Listener at localhost/36721] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:54,008 INFO [Listener at localhost/36721] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/Jetty_localhost_40273_hdfs____er4wu4/webapp 2023-07-24 23:10:54,100 INFO [Listener at localhost/36721] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40273 2023-07-24 23:10:54,105 WARN [Listener at localhost/36721] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 23:10:54,105 WARN [Listener at localhost/36721] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 23:10:54,145 WARN [Listener at localhost/34031] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:54,155 WARN [Listener at localhost/34031] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:54,157 WARN [Listener at localhost/34031] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:54,158 INFO [Listener at localhost/34031] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:54,162 INFO [Listener at localhost/34031] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/Jetty_localhost_33221_datanode____.6a4zk8/webapp 2023-07-24 23:10:54,255 INFO [Listener at localhost/34031] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33221 2023-07-24 23:10:54,263 WARN [Listener at localhost/45209] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:54,277 WARN [Listener at localhost/45209] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:54,279 WARN [Listener at localhost/45209] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:54,280 INFO [Listener at localhost/45209] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:54,283 INFO [Listener at localhost/45209] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/Jetty_localhost_41879_datanode____ux9zhg/webapp 2023-07-24 23:10:54,370 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ffd506831301412: Processing first storage report for DS-89860324-f117-4671-8197-befa8c84a26c from datanode a3d511de-122d-4ff4-8fc5-040fe3fb445e 2023-07-24 23:10:54,370 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ffd506831301412: from storage DS-89860324-f117-4671-8197-befa8c84a26c node DatanodeRegistration(127.0.0.1:36877, datanodeUuid=a3d511de-122d-4ff4-8fc5-040fe3fb445e, infoPort=34051, infoSecurePort=0, ipcPort=45209, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,371 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ffd506831301412: Processing first storage report for DS-42bb68aa-bfa2-4dd3-a5aa-bb1dece6b126 from datanode a3d511de-122d-4ff4-8fc5-040fe3fb445e 2023-07-24 23:10:54,371 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ffd506831301412: from storage DS-42bb68aa-bfa2-4dd3-a5aa-bb1dece6b126 node DatanodeRegistration(127.0.0.1:36877, datanodeUuid=a3d511de-122d-4ff4-8fc5-040fe3fb445e, infoPort=34051, infoSecurePort=0, ipcPort=45209, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,390 INFO [Listener at localhost/45209] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41879 2023-07-24 23:10:54,399 WARN [Listener at localhost/36475] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:54,414 WARN [Listener at localhost/36475] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 23:10:54,416 WARN [Listener at localhost/36475] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 23:10:54,417 INFO [Listener at localhost/36475] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 23:10:54,419 INFO [Listener at localhost/36475] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/Jetty_localhost_45613_datanode____qkewbi/webapp 2023-07-24 23:10:54,494 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2585b3a3a540ebb: Processing first storage report for DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8 from datanode 9326d3f2-1f0c-460c-a0e6-afc18ce4b612 2023-07-24 23:10:54,494 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2585b3a3a540ebb: from storage DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8 node DatanodeRegistration(127.0.0.1:36005, datanodeUuid=9326d3f2-1f0c-460c-a0e6-afc18ce4b612, infoPort=33981, infoSecurePort=0, ipcPort=36475, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,494 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe2585b3a3a540ebb: Processing first storage report for DS-09beafa7-5953-4c58-958e-29a94c29fd3c from datanode 
9326d3f2-1f0c-460c-a0e6-afc18ce4b612 2023-07-24 23:10:54,494 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe2585b3a3a540ebb: from storage DS-09beafa7-5953-4c58-958e-29a94c29fd3c node DatanodeRegistration(127.0.0.1:36005, datanodeUuid=9326d3f2-1f0c-460c-a0e6-afc18ce4b612, infoPort=33981, infoSecurePort=0, ipcPort=36475, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,526 INFO [Listener at localhost/36475] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45613 2023-07-24 23:10:54,535 WARN [Listener at localhost/33659] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 23:10:54,643 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ebd04316d44787: Processing first storage report for DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72 from datanode a04b25c7-1e04-465b-b09e-ec2725065e25 2023-07-24 23:10:54,643 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ebd04316d44787: from storage DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72 node DatanodeRegistration(127.0.0.1:39601, datanodeUuid=a04b25c7-1e04-465b-b09e-ec2725065e25, infoPort=44397, infoSecurePort=0, ipcPort=33659, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,643 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6ebd04316d44787: Processing first storage report for DS-7fec4ba0-429d-4c64-b19e-bc0703d1febf from datanode a04b25c7-1e04-465b-b09e-ec2725065e25 2023-07-24 23:10:54,643 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6ebd04316d44787: from storage DS-7fec4ba0-429d-4c64-b19e-bc0703d1febf node DatanodeRegistration(127.0.0.1:39601, datanodeUuid=a04b25c7-1e04-465b-b09e-ec2725065e25, infoPort=44397, infoSecurePort=0, ipcPort=33659, storageInfo=lv=-57;cid=testClusterID;nsid=781759465;c=1690240253959), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 23:10:54,645 DEBUG [Listener at localhost/33659] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88 2023-07-24 23:10:54,651 INFO [Listener at localhost/33659] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/zookeeper_0, clientPort=61494, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 23:10:54,652 INFO [Listener at localhost/33659] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=61494 2023-07-24 23:10:54,652 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,653 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,670 INFO [Listener at localhost/33659] util.FSUtils(471): Created version file at hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f with version=8 2023-07-24 23:10:54,670 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:38733/user/jenkins/test-data/259a1c0e-ac00-07ac-9244-9ee2515f4a8c/hbase-staging 2023-07-24 23:10:54,671 DEBUG [Listener at localhost/33659] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 23:10:54,671 DEBUG [Listener at localhost/33659] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 23:10:54,671 DEBUG [Listener at localhost/33659] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 23:10:54,671 DEBUG [Listener at localhost/33659] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:54,672 INFO [Listener at localhost/33659] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:54,673 INFO [Listener at localhost/33659] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42745 2023-07-24 23:10:54,673 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,674 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,675 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42745 connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:54,682 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:427450x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:54,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42745-0x101999a05ba0000 connected 2023-07-24 23:10:54,698 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:54,698 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:54,699 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:54,701 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42745 2023-07-24 23:10:54,701 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42745 2023-07-24 23:10:54,701 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42745 2023-07-24 23:10:54,702 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42745 2023-07-24 23:10:54,702 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42745 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:54,704 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:54,705 INFO [Listener at localhost/33659] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
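Editor's note: the bring-up traced in the entries above and below (a MiniZooKeeperCluster answering on clientPort=61494, a master RPC server bound to port 42745, and three region-server processes plus three datanodes joining it) is what HBaseTestingUtility produces when a test asks for a mini cluster. Below is a minimal, hypothetical sketch of that call sequence; the option values simply mirror the counts visible in this log, and none of it is copied from the actual test code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  // Illustrative only: the real test drives this through its base-class setup.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)          // one active master, as in the log
        .numRegionServers(3)    // three region servers come up on random ports
        .numDataNodes(3)        // three HDFS datanodes back the WAL pipeline
        .build();
    TEST_UTIL.startMiniCluster(option);   // starts DFS, ZooKeeper, the master and the region servers
    try {
      // The cluster id printed here corresponds to the hbase.id file the master writes on startup.
      System.out.println("clusterId="
          + TEST_UTIL.getMiniHBaseCluster().getClusterMetrics().getClusterId());
    } finally {
      TEST_UTIL.shutdownMiniCluster();    // tear everything down again
    }
  }
}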
2023-07-24 23:10:54,705 INFO [Listener at localhost/33659] http.HttpServer(1146): Jetty bound to port 43107 2023-07-24 23:10:54,705 INFO [Listener at localhost/33659] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:54,710 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,710 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c61d2b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:54,711 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,711 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7ddd5322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:54,826 INFO [Listener at localhost/33659] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:54,828 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:54,828 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:54,828 INFO [Listener at localhost/33659] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:54,829 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,831 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@f50000{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/jetty-0_0_0_0-43107-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6717645013118864445/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:54,832 INFO [Listener at localhost/33659] server.AbstractConnector(333): Started ServerConnector@18478842{HTTP/1.1, (http/1.1)}{0.0.0.0:43107} 2023-07-24 23:10:54,832 INFO [Listener at localhost/33659] server.Server(415): Started @42136ms 2023-07-24 23:10:54,832 INFO [Listener at localhost/33659] master.HMaster(444): hbase.rootdir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f, hbase.cluster.distributed=false 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,845 INFO 
[Listener at localhost/33659] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:54,845 INFO [Listener at localhost/33659] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:54,846 INFO [Listener at localhost/33659] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34441 2023-07-24 23:10:54,846 INFO [Listener at localhost/33659] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:54,847 DEBUG [Listener at localhost/33659] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:54,848 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,849 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,849 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34441 connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:54,854 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:344410x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:54,856 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34441-0x101999a05ba0001 connected 2023-07-24 23:10:54,856 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:54,856 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:54,857 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:54,857 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34441 2023-07-24 23:10:54,857 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34441 2023-07-24 23:10:54,857 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34441 2023-07-24 23:10:54,858 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34441 2023-07-24 23:10:54,858 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34441 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:54,860 INFO [Listener at localhost/33659] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:54,861 INFO [Listener at localhost/33659] http.HttpServer(1146): Jetty bound to port 35873 2023-07-24 23:10:54,861 INFO [Listener at localhost/33659] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:54,862 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,862 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@61ab34ff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:54,862 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,863 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@17847e44{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:54,975 INFO [Listener at localhost/33659] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:54,976 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:54,976 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:54,976 INFO [Listener at localhost/33659] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:54,977 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:54,978 INFO 
[Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1d5b597{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/jetty-0_0_0_0-35873-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2556131898184694381/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:54,979 INFO [Listener at localhost/33659] server.AbstractConnector(333): Started ServerConnector@473a089d{HTTP/1.1, (http/1.1)}{0.0.0.0:35873} 2023-07-24 23:10:54,979 INFO [Listener at localhost/33659] server.Server(415): Started @42283ms 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:54,991 INFO [Listener at localhost/33659] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:54,992 INFO [Listener at localhost/33659] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39455 2023-07-24 23:10:54,992 INFO [Listener at localhost/33659] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:54,993 DEBUG [Listener at localhost/33659] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:54,994 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,995 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:54,996 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39455 connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:55,001 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:394550x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
23:10:55,003 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39455-0x101999a05ba0002 connected 2023-07-24 23:10:55,003 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:55,004 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:55,004 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:55,005 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39455 2023-07-24 23:10:55,005 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39455 2023-07-24 23:10:55,006 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39455 2023-07-24 23:10:55,006 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39455 2023-07-24 23:10:55,006 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39455 2023-07-24 23:10:55,008 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:55,008 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:55,008 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:55,008 INFO [Listener at localhost/33659] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:55,009 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:55,009 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:55,009 INFO [Listener at localhost/33659] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
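Editor's note: each server process above connects to the ensemble at 127.0.0.1:61494 and immediately sets watchers on znodes that may not exist yet (/hbase/master, /hbase/running, /hbase/acl), which is why ZKUtil logs "Set watcher on znode that does not yet exist". A rough, self-contained sketch of that pattern using the same helpers follows; treat the exact signatures as assumptions about the branch-2.4 API rather than a quotation of the test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZKWatcher;

public class WatchMasterZNodeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Point at the test ensemble; 61494 is the clientPort shown in the log above.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 61494);

    // ZKWatcher(conf, identifier, abortable); a null Abortable is tolerable in a throwaway sketch.
    try (ZKWatcher zkw = new ZKWatcher(conf, "watch-sketch", null)) {
      // Sets a watch whether or not the znode exists yet -- the behaviour the
      // "Set watcher on znode that does not yet exist, /hbase/master" lines record.
      boolean exists = ZKUtil.watchAndCheckExists(zkw, zkw.getZNodePaths().masterAddressZNode);
      System.out.println("/hbase/master exists yet? " + exists);
    }
  }
}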
2023-07-24 23:10:55,009 INFO [Listener at localhost/33659] http.HttpServer(1146): Jetty bound to port 40261 2023-07-24 23:10:55,009 INFO [Listener at localhost/33659] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:55,014 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,014 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bb29dd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:55,014 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,014 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21da4d95{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:55,130 INFO [Listener at localhost/33659] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:55,130 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:55,131 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:55,131 INFO [Listener at localhost/33659] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 23:10:55,132 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,132 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5ef7553{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/jetty-0_0_0_0-40261-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2872125222245222612/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:55,135 INFO [Listener at localhost/33659] server.AbstractConnector(333): Started ServerConnector@81fc2e1{HTTP/1.1, (http/1.1)}{0.0.0.0:40261} 2023-07-24 23:10:55,135 INFO [Listener at localhost/33659] server.Server(415): Started @42438ms 2023-07-24 23:10:55,146 INFO [Listener at localhost/33659] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:55,146 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:55,146 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:55,147 INFO [Listener at localhost/33659] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:55,147 INFO 
[Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:55,147 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:55,147 INFO [Listener at localhost/33659] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:55,147 INFO [Listener at localhost/33659] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44771 2023-07-24 23:10:55,148 INFO [Listener at localhost/33659] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:55,149 DEBUG [Listener at localhost/33659] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:55,149 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:55,150 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:55,151 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44771 connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:55,154 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:447710x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:55,155 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:447710x0, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 23:10:55,155 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44771-0x101999a05ba0003 connected 2023-07-24 23:10:55,156 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:55,156 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:55,156 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44771 2023-07-24 23:10:55,156 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44771 2023-07-24 23:10:55,157 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44771 2023-07-24 23:10:55,157 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44771 2023-07-24 23:10:55,157 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=44771 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:55,159 INFO [Listener at localhost/33659] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 23:10:55,160 INFO [Listener at localhost/33659] http.HttpServer(1146): Jetty bound to port 37333 2023-07-24 23:10:55,160 INFO [Listener at localhost/33659] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:55,161 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,161 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@23981551{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:55,162 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,162 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b2fe1d6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:55,273 INFO [Listener at localhost/33659] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:55,274 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:55,274 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:55,274 INFO [Listener at localhost/33659] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:55,275 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:55,276 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5ecbfcb6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/jetty-0_0_0_0-37333-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6001572057400289681/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:55,277 INFO [Listener at localhost/33659] server.AbstractConnector(333): Started ServerConnector@3095489{HTTP/1.1, (http/1.1)}{0.0.0.0:37333} 2023-07-24 23:10:55,277 INFO [Listener at localhost/33659] server.Server(415): Started @42581ms 2023-07-24 23:10:55,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:55,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@9fb3333{HTTP/1.1, (http/1.1)}{0.0.0.0:37817} 2023-07-24 23:10:55,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42587ms 2023-07-24 23:10:55,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,285 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:55,285 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,287 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:55,287 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:55,287 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:55,287 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:55,288 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,289 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:55,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42745,1690240254672 from backup master directory 2023-07-24 23:10:55,291 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:55,292 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,292 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 23:10:55,292 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:55,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/hbase.id with ID: 002b9645-9242-4f1f-8283-3b07dad32c94 2023-07-24 23:10:55,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:55,327 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5a33b819 to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:55,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62ac3421, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:55,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:55,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 23:10:55,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:55,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store-tmp 2023-07-24 23:10:55,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:55,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:55,355 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:55,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:55,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:55,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:55,356 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
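Editor's note: the master's bootstrap above creates its local 'master:store' region from a table descriptor with a single 'proc' family (one version, ROW bloom filter, 64 KB blocks). That descriptor is built internally along the MasterRegion code path the log names; purely as a point of reference, the hedged sketch below reconstructs the same shape with the public descriptor builders. It is illustrative only, not how the master actually wires it up.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Mirrors the logged descriptor: {NAME => 'proc', BLOOMFILTER => 'ROW',
    // VERSIONS => '1', BLOCKSIZE => '65536'} on table 'master:store'.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(64 * 1024)
            .build())
        .build();
    System.out.println(td);
  }
}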
2023-07-24 23:10:55,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:55,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/WALs/jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42745%2C1690240254672, suffix=, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/WALs/jenkins-hbase4.apache.org,42745,1690240254672, archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/oldWALs, maxLogs=10 2023-07-24 23:10:55,375 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK] 2023-07-24 23:10:55,375 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK] 2023-07-24 23:10:55,375 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK] 2023-07-24 23:10:55,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/WALs/jenkins-hbase4.apache.org,42745,1690240254672/jenkins-hbase4.apache.org%2C42745%2C1690240254672.1690240255359 2023-07-24 23:10:55,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK], DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK]] 2023-07-24 23:10:55,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:55,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:55,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,378 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,380 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 23:10:55,380 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 23:10:55,381 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,381 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 23:10:55,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:55,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9885044960, jitterRate=-0.07938344776630402}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:55,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:55,386 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 23:10:55,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 23:10:55,387 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 23:10:55,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 23:10:55,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 23:10:55,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 23:10:55,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 23:10:55,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 23:10:55,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 23:10:55,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 23:10:55,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 23:10:55,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 23:10:55,392 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 23:10:55,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 23:10:55,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 23:10:55,395 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:55,395 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:55,395 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 23:10:55,395 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:55,395 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42745,1690240254672, sessionid=0x101999a05ba0000, setting cluster-up flag (Was=false) 2023-07-24 23:10:55,399 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 23:10:55,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,411 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 23:10:55,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:55,417 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.hbase-snapshot/.tmp 2023-07-24 23:10:55,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 23:10:55,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 23:10:55,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 23:10:55,420 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:55,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
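Editor's note: at this point the active master has registered the RSGroupAdminService and loaded two system coprocessors, org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint and the test's own TestRSGroupsBase$CPMasterObserver. A hypothetical reconstruction of the configuration that produces that loading is sketched below; the key names are standard HBase configuration keys, but the exact wiring (including the group-aware balancer, which this excerpt does not show being configured) is an assumption, not a copy of the test's setup code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RSGroupTestConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Master-side system coprocessors, loaded in declaration order with consecutive priorities,
    // matching the two "System coprocessor ... loaded" lines above.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint,"
            + "org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver");
    // Assumed: the rsgroup feature also swaps in the group-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}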
2023-07-24 23:10:55,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 23:10:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 23:10:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690240285442 2023-07-24 23:10:55,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 23:10:55,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 23:10:55,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 23:10:55,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 23:10:55,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 23:10:55,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 23:10:55,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,445 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:55,445 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 23:10:55,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 23:10:55,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 23:10:55,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 23:10:55,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 23:10:55,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 23:10:55,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240255446,5,FailOnTimeoutGroup] 2023-07-24 23:10:55,447 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
{NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:55,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240255446,5,FailOnTimeoutGroup] 2023-07-24 23:10:55,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 23:10:55,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,464 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:55,465 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:55,465 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f 2023-07-24 23:10:55,477 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:55,481 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for 
column family info of region 1588230740 2023-07-24 23:10:55,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/info 2023-07-24 23:10:55,487 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:55,488 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(951): ClusterId : 002b9645-9242-4f1f-8283-3b07dad32c94 2023-07-24 23:10:55,488 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:55,490 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(951): ClusterId : 002b9645-9242-4f1f-8283-3b07dad32c94 2023-07-24 23:10:55,490 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:55,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,491 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:55,492 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(951): ClusterId : 002b9645-9242-4f1f-8283-3b07dad32c94 2023-07-24 23:10:55,492 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:55,492 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:55,493 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:55,494 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,494 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:55,494 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:55,494 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:55,494 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:55,494 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:55,495 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/table 2023-07-24 23:10:55,496 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:55,496 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,497 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:55,497 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:55,497 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740 2023-07-24 23:10:55,497 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:55,499 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740 2023-07-24 23:10:55,500 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ReadOnlyZKClient(139): Connect 0x1162d826 to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:55,500 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:55,502 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 23:10:55,517 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:55,529 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ReadOnlyZKClient(139): Connect 0x7be39a23 to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:55,529 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:55,529 DEBUG [RS:0;jenkins-hbase4:34441] zookeeper.ReadOnlyZKClient(139): Connect 0x0fc2a32e to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:55,541 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:55,542 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11161693440, jitterRate=0.03951370716094971}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:55,542 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:55,542 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:55,542 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:55,542 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:55,542 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:55,542 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:55,543 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:55,543 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 23:10:55,544 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 23:10:55,544 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 23:10:55,544 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 23:10:55,546 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 23:10:55,547 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 23:10:55,548 DEBUG 
[RS:0;jenkins-hbase4:34441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d83beeb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:55,549 DEBUG [RS:0;jenkins-hbase4:34441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e87094f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:55,553 DEBUG [RS:1;jenkins-hbase4:39455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4426b94, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:55,554 DEBUG [RS:1;jenkins-hbase4:39455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ad26554, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:55,555 DEBUG [RS:2;jenkins-hbase4:44771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fa5347a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:55,555 DEBUG [RS:2;jenkins-hbase4:44771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b9ab356, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:55,565 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34441 2023-07-24 23:10:55,565 INFO [RS:0;jenkins-hbase4:34441] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:55,565 INFO [RS:0;jenkins-hbase4:34441] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:55,565 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 23:10:55,566 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42745,1690240254672 with isa=jenkins-hbase4.apache.org/172.31.14.131:34441, startcode=1690240254844 2023-07-24 23:10:55,566 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39455 2023-07-24 23:10:55,566 DEBUG [RS:0;jenkins-hbase4:34441] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:55,566 INFO [RS:1;jenkins-hbase4:39455] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:55,566 INFO [RS:1;jenkins-hbase4:39455] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:55,566 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:55,567 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42745,1690240254672 with isa=jenkins-hbase4.apache.org/172.31.14.131:39455, startcode=1690240254990 2023-07-24 23:10:55,567 DEBUG [RS:1;jenkins-hbase4:39455] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:55,568 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59431, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:55,568 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44771 2023-07-24 23:10:55,568 INFO [RS:2;jenkins-hbase4:44771] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:55,568 INFO [RS:2;jenkins-hbase4:44771] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:55,568 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:55,575 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42745] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,575 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f 2023-07-24 23:10:55,575 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34031 2023-07-24 23:10:55,575 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43107 2023-07-24 23:10:55,577 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:55,577 DEBUG [RS:0;jenkins-hbase4:34441] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,578 WARN [RS:0;jenkins-hbase4:34441] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
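[annotation] The reportForDuty / ServerManager registration entries above are each region server joining the master's live-server list and creating its ephemeral znode under /hbase/rs (the HBASE_ZNODE_FILE warning only affects how start scripts clean that znode up after a crash). A minimal sketch of how the same membership could be observed from a client, not taken from the test itself; class and variable names are illustrative:

    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public final class ListLiveServers {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml / test conf
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          ClusterMetrics metrics =
              admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
          for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
            // Prints entries of the form host,port,startcode, e.g.
            // jenkins-hbase4.apache.org,34441,1690240254844 as registered above.
            System.out.println(sn.getServerName());
          }
        }
      }
    }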
2023-07-24 23:10:55,578 INFO [RS:0;jenkins-hbase4:34441] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:55,578 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,578 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:55,581 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 23:10:55,582 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42745,1690240254672 with isa=jenkins-hbase4.apache.org/172.31.14.131:44771, startcode=1690240255146 2023-07-24 23:10:55,582 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34441,1690240254844] 2023-07-24 23:10:55,582 DEBUG [RS:2;jenkins-hbase4:44771] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:55,583 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41019, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:55,583 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42745] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,583 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 23:10:55,583 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 23:10:55,583 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f 2023-07-24 23:10:55,584 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34031 2023-07-24 23:10:55,584 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43107 2023-07-24 23:10:55,584 DEBUG [RS:0;jenkins-hbase4:34441] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,587 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38369, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:55,587 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42745] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 23:10:55,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 23:10:55,587 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f 2023-07-24 23:10:55,587 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34031 2023-07-24 23:10:55,587 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43107 2023-07-24 23:10:55,590 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:55,590 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:55,591 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,591 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44771,1690240255146] 2023-07-24 23:10:55,592 WARN [RS:1;jenkins-hbase4:39455] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
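[annotation] The ServerEventsListenerThread entries above ("Updating default servers" ... "Updated with servers: 1/2/3") show each newly registered region server being placed into the rsgroup named "default". A minimal sketch of inspecting that group, assuming the hbase-rsgroup client API on this branch and that the RSGroupAdminEndpoint coprocessor is loaded (as these tests configure); not quoted from the test:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class ShowDefaultGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          for (Address server : defaultGroup.getServers()) {
            // e.g. jenkins-hbase4.apache.org:34441 once all three servers have registered
            System.out.println("default group member: " + server);
          }
        }
      }
    }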
2023-07-24 23:10:55,592 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39455,1690240254990] 2023-07-24 23:10:55,592 INFO [RS:1;jenkins-hbase4:39455] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:55,592 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,592 DEBUG [RS:0;jenkins-hbase4:34441] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:55,592 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,592 WARN [RS:2;jenkins-hbase4:44771] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 23:10:55,592 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,592 INFO [RS:0;jenkins-hbase4:34441] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:55,592 INFO [RS:2;jenkins-hbase4:44771] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:55,593 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,594 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,598 INFO [RS:0;jenkins-hbase4:34441] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:55,600 INFO [RS:0;jenkins-hbase4:34441] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:55,600 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
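[annotation] The memstore limits and compaction-throughput bounds logged above (globalMemStoreLimit=782.4 M, higher bound 100 MB/s, lower bound 50 MB/s) are derived from configuration rather than hard-coded. A hedged sketch of tuning them on a Configuration before starting a cluster; the keys are standard HBase settings and the values are examples only, not what this test sets:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class TuneRegionServerConf {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the region-server heap usable by all memstores
        // (reported above as globalMemStoreLimit / LowMark).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Bounds used by PressureAwareCompactionThroughputController
        // (reported above as "higher bound: 100.00 MB/second, lower bound 50.00 MB/second").
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        System.out.println("memstore fraction = "
            + conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f));
      }
    }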
2023-07-24 23:10:55,601 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,601 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,602 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,602 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:55,603 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:55,603 INFO [RS:2;jenkins-hbase4:44771] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:55,603 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,604 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor 
service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:0;jenkins-hbase4:34441] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,604 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,606 INFO [RS:2;jenkins-hbase4:44771] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:55,607 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,607 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,607 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,608 INFO [RS:2;jenkins-hbase4:44771] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:55,608 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,614 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:55,616 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:55,616 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:55,616 INFO [RS:1;jenkins-hbase4:39455] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:55,619 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,623 DEBUG [RS:2;jenkins-hbase4:44771] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,626 INFO [RS:1;jenkins-hbase4:39455] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:55,626 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,626 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,627 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,627 INFO [RS:1;jenkins-hbase4:39455] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:55,627 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:55,627 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:55,628 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,628 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,628 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,628 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 DEBUG [RS:1;jenkins-hbase4:39455] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:55,629 INFO [RS:0;jenkins-hbase4:34441] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:55,630 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34441,1690240254844-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,636 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,636 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,636 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,649 INFO [RS:2;jenkins-hbase4:44771] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:55,649 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44771,1690240255146-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:55,652 INFO [RS:1;jenkins-hbase4:39455] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:55,652 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39455,1690240254990-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,664 INFO [RS:0;jenkins-hbase4:34441] regionserver.Replication(203): jenkins-hbase4.apache.org,34441,1690240254844 started 2023-07-24 23:10:55,664 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34441,1690240254844, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34441, sessionid=0x101999a05ba0001 2023-07-24 23:10:55,664 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:55,664 DEBUG [RS:0;jenkins-hbase4:34441] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,664 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34441,1690240254844' 2023-07-24 23:10:55,664 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:55,665 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:55,665 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:55,666 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:55,666 DEBUG [RS:0;jenkins-hbase4:34441] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:55,666 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34441,1690240254844' 2023-07-24 23:10:55,666 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:55,666 DEBUG [RS:0;jenkins-hbase4:34441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:55,671 DEBUG [RS:0;jenkins-hbase4:34441] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:55,671 INFO [RS:0;jenkins-hbase4:34441] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:55,671 INFO [RS:0;jenkins-hbase4:34441] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
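[annotation] The "flush-table-proc" and "online-snapshot" procedure members started above are the region-server side of the ZooKeeper-coordinated procedures; from a client they are exercised through ordinary Admin calls. A minimal sketch, with a hypothetical table name, not taken from the test:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public final class FlushAndSnapshot {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("t1"); // hypothetical table
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.flush(table);               // fans out to the flush-table-proc members
          admin.snapshot("t1-snap", table); // fans out to the online-snapshot members
        }
      }
    }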
2023-07-24 23:10:55,671 INFO [RS:2;jenkins-hbase4:44771] regionserver.Replication(203): jenkins-hbase4.apache.org,44771,1690240255146 started 2023-07-24 23:10:55,671 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44771,1690240255146, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44771, sessionid=0x101999a05ba0003 2023-07-24 23:10:55,672 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:55,672 DEBUG [RS:2;jenkins-hbase4:44771] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,672 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44771,1690240255146' 2023-07-24 23:10:55,672 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:55,672 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44771,1690240255146' 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:55,673 DEBUG [RS:2;jenkins-hbase4:44771] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:55,673 INFO [RS:2;jenkins-hbase4:44771] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:55,673 INFO [RS:2;jenkins-hbase4:44771] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 23:10:55,674 INFO [RS:1;jenkins-hbase4:39455] regionserver.Replication(203): jenkins-hbase4.apache.org,39455,1690240254990 started 2023-07-24 23:10:55,674 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39455,1690240254990, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39455, sessionid=0x101999a05ba0002 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39455,1690240254990' 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:55,675 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:55,676 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:55,676 DEBUG [RS:1;jenkins-hbase4:39455] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:55,676 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39455,1690240254990' 2023-07-24 23:10:55,676 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:55,677 DEBUG [RS:1;jenkins-hbase4:39455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:55,677 DEBUG [RS:1;jenkins-hbase4:39455] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:55,677 INFO [RS:1;jenkins-hbase4:39455] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:55,677 INFO [RS:1;jenkins-hbase4:39455] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
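[annotation] At this point all three region servers are serving (the three "Serving as ..." entries above) and the master is about to process the assign queue for hbase:meta. A minimal sketch of how a test based on HBaseTestingUtility typically blocks until this stage before issuing requests; this mirrors the public testing API and is not quoted from TestRSGroupsAdmin1:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.TableName;

    public final class WaitForCluster {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
        // Master has finished becomeActiveMaster initialization.
        util.getMiniHBaseCluster().waitForActiveAndReadyMaster();
        // hbase:meta has been assigned and opened on some region server.
        util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);
        // ... run assertions against util.getAdmin() here ...
        util.shutdownMiniCluster();
      }
    }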
2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:55,698 DEBUG [jenkins-hbase4:42745] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:55,699 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44771,1690240255146, state=OPENING 2023-07-24 23:10:55,701 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 23:10:55,702 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:55,702 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:55,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44771,1690240255146}] 2023-07-24 23:10:55,729 WARN [ReadOnlyZKClient-127.0.0.1:61494@0x5a33b819] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 23:10:55,730 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:55,731 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:55,731 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44771] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53926 deadline: 1690240315731, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,773 INFO [RS:0;jenkins-hbase4:34441] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34441%2C1690240254844, suffix=, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,34441,1690240254844, archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs, maxLogs=32 2023-07-24 23:10:55,775 INFO [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44771%2C1690240255146, suffix=, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,44771,1690240255146, 
archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs, maxLogs=32 2023-07-24 23:10:55,779 INFO [RS:1;jenkins-hbase4:39455] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39455%2C1690240254990, suffix=, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,39455,1690240254990, archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs, maxLogs=32 2023-07-24 23:10:55,792 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK] 2023-07-24 23:10:55,793 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK] 2023-07-24 23:10:55,795 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK] 2023-07-24 23:10:55,800 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK] 2023-07-24 23:10:55,800 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK] 2023-07-24 23:10:55,803 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK] 2023-07-24 23:10:55,807 INFO [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,44771,1690240255146/jenkins-hbase4.apache.org%2C44771%2C1690240255146.1690240255775 2023-07-24 23:10:55,808 DEBUG [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK]] 2023-07-24 23:10:55,809 INFO [RS:0;jenkins-hbase4:34441] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,34441,1690240254844/jenkins-hbase4.apache.org%2C34441%2C1690240254844.1690240255773 2023-07-24 23:10:55,810 DEBUG [RS:0;jenkins-hbase4:34441] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK]] 2023-07-24 23:10:55,814 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK] 2023-07-24 23:10:55,814 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK] 2023-07-24 23:10:55,814 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK] 2023-07-24 23:10:55,818 INFO [RS:1;jenkins-hbase4:39455] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,39455,1690240254990/jenkins-hbase4.apache.org%2C39455%2C1690240254990.1690240255779 2023-07-24 23:10:55,818 DEBUG [RS:1;jenkins-hbase4:39455] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK]] 2023-07-24 23:10:55,857 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:55,859 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:55,860 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53942, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:55,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 23:10:55,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:55,865 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44771%2C1690240255146.meta, suffix=.meta, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,44771,1690240255146, archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs, maxLogs=32 2023-07-24 23:10:55,879 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK] 2023-07-24 23:10:55,879 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK] 2023-07-24 23:10:55,880 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK] 2023-07-24 23:10:55,882 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,44771,1690240255146/jenkins-hbase4.apache.org%2C44771%2C1690240255146.meta.1690240255866.meta 2023-07-24 23:10:55,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK], DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK]] 2023-07-24 23:10:55,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:55,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:55,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 23:10:55,883 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
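
The AbstractFSWAL(489) and WALFactory(158) entries above are driven by a handful of standard WAL settings. A minimal sketch of the corresponding Configuration, assuming the usual HBase property names and the values reported in the log (the wrapper class and method here are illustrative, not part of the test):

```java
// Illustrative sketch only: Configuration knobs behind the
// "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" entries.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static Configuration walConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");                              // AsyncFSWALProvider, as logged by WALFactory
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);  // blocksize=256 MB
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);           // rollsize=128 MB (0.5 * blocksize)
    conf.setInt("hbase.regionserver.maxlogs", 32);                          // maxLogs=32
    return conf;
  }
}
```
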
2023-07-24 23:10:55,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 23:10:55,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:55,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 23:10:55,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 23:10:55,884 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 23:10:55,885 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/info 2023-07-24 23:10:55,885 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/info 2023-07-24 23:10:55,885 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 23:10:55,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 23:10:55,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:55,886 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 23:10:55,887 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 23:10:55,887 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,887 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 23:10:55,888 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/table 2023-07-24 23:10:55,888 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/table 2023-07-24 23:10:55,889 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 23:10:55,889 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:55,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740 2023-07-24 23:10:55,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740 2023-07-24 23:10:55,893 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
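
The CompactionConfiguration(173) summaries printed for each column family map onto well-known store compaction properties. A minimal sketch, assuming the standard HBase keys and reusing the values reported in the log (the wrapper class is illustrative only):

```java
// Illustrative sketch only: settings behind the CompactionConfiguration summary above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static Configuration compactionConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact: 3
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact: 10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
    return conf;
  }
}
```
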
2023-07-24 23:10:55,894 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 23:10:55,895 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10273093440, jitterRate=-0.043243616819381714}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 23:10:55,895 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 23:10:55,895 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690240255857 2023-07-24 23:10:55,899 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 23:10:55,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 23:10:55,900 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44771,1690240255146, state=OPEN 2023-07-24 23:10:55,901 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 23:10:55,901 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 23:10:55,903 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 23:10:55,903 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44771,1690240255146 in 199 msec 2023-07-24 23:10:55,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 23:10:55,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 359 msec 2023-07-24 23:10:55,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 485 msec 2023-07-24 23:10:55,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690240255907, completionTime=-1 2023-07-24 23:10:55,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 23:10:55,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 23:10:55,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 23:10:55,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690240315911 2023-07-24 23:10:55,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690240375911 2023-07-24 23:10:55,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-24 23:10:55,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42745,1690240254672-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42745,1690240254672-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,917 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42745,1690240254672-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42745, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:55,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 23:10:55,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:55,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 23:10:55,921 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 23:10:55,921 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:55,922 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:55,923 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:55,924 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3 empty. 2023-07-24 23:10:55,924 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:55,924 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 23:10:55,942 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:55,944 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 36cfb92311c6e84a9b8aed595f98e6f3, NAME => 'hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp 2023-07-24 23:10:55,958 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:55,959 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 36cfb92311c6e84a9b8aed595f98e6f3, disabling compactions & flushes 2023-07-24 23:10:55,959 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 
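
The HMaster(2148) entry above logs the full descriptor used for 'hbase:namespace'. A minimal sketch of an equivalent descriptor built with the HBase 2.x client API, assuming a hypothetical helper class; in the log the table is created internally by the master (via TableNamespaceManager), not by a client:

```java
// Illustrative sketch only: a descriptor equivalent to the logged 'hbase:namespace' definition.
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceTableSketch {
  static TableDescriptor namespaceTable() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build())
        .build();
  }

  // Shown as a client call for illustration; the master performs this step itself in the log.
  static void create(Admin admin) throws IOException {
    admin.createTable(namespaceTable());
  }
}
```
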
2023-07-24 23:10:55,959 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:55,959 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. after waiting 0 ms 2023-07-24 23:10:55,959 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:55,959 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:55,959 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 36cfb92311c6e84a9b8aed595f98e6f3: 2023-07-24 23:10:55,961 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:55,962 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240255962"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240255962"}]},"ts":"1690240255962"} 2023-07-24 23:10:55,965 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:55,965 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:55,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240255965"}]},"ts":"1690240255965"} 2023-07-24 23:10:55,967 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 23:10:55,970 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:55,970 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:55,970 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:55,970 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:55,970 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:55,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=36cfb92311c6e84a9b8aed595f98e6f3, ASSIGN}] 2023-07-24 23:10:55,972 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=36cfb92311c6e84a9b8aed595f98e6f3, ASSIGN 2023-07-24 23:10:55,973 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=36cfb92311c6e84a9b8aed595f98e6f3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44771,1690240255146; forceNewPlan=false, retain=false 2023-07-24 23:10:56,033 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:56,035 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 23:10:56,037 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:56,038 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:56,039 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,040 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40 empty. 
2023-07-24 23:10:56,040 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,040 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 23:10:56,054 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:56,055 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 74fba4b5e736643fad81f5eef3c41f40, NAME => 'hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp 2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 74fba4b5e736643fad81f5eef3c41f40, disabling compactions & flushes 2023-07-24 23:10:56,063 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. after waiting 0 ms 2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:56,063 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 
2023-07-24 23:10:56,063 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 74fba4b5e736643fad81f5eef3c41f40: 2023-07-24 23:10:56,066 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:56,067 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240256067"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240256067"}]},"ts":"1690240256067"} 2023-07-24 23:10:56,069 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:56,069 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:56,069 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240256069"}]},"ts":"1690240256069"} 2023-07-24 23:10:56,070 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 23:10:56,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:56,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:56,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:56,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:56,074 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:56,075 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=74fba4b5e736643fad81f5eef3c41f40, ASSIGN}] 2023-07-24 23:10:56,075 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=74fba4b5e736643fad81f5eef3c41f40, ASSIGN 2023-07-24 23:10:56,076 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=74fba4b5e736643fad81f5eef3c41f40, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39455,1690240254990; forceNewPlan=false, retain=false 2023-07-24 23:10:56,076 INFO [jenkins-hbase4:42745] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 23:10:56,078 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=36cfb92311c6e84a9b8aed595f98e6f3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,078 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240256078"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240256078"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240256078"}]},"ts":"1690240256078"} 2023-07-24 23:10:56,079 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=74fba4b5e736643fad81f5eef3c41f40, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,079 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240256079"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240256079"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240256079"}]},"ts":"1690240256079"} 2023-07-24 23:10:56,080 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 36cfb92311c6e84a9b8aed595f98e6f3, server=jenkins-hbase4.apache.org,44771,1690240255146}] 2023-07-24 23:10:56,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 74fba4b5e736643fad81f5eef3c41f40, server=jenkins-hbase4.apache.org,39455,1690240254990}] 2023-07-24 23:10:56,203 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 23:10:56,233 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,233 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:56,235 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54056, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:56,239 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 
2023-07-24 23:10:56,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36cfb92311c6e84a9b8aed595f98e6f3, NAME => 'hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:56,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:56,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,240 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:56,240 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 74fba4b5e736643fad81f5eef3c41f40, NAME => 'hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. service=MultiRowMutationService 2023-07-24 23:10:56,241 INFO [StoreOpener-36cfb92311c6e84a9b8aed595f98e6f3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,241 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,243 INFO [StoreOpener-74fba4b5e736643fad81f5eef3c41f40-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,243 DEBUG [StoreOpener-36cfb92311c6e84a9b8aed595f98e6f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/info 2023-07-24 23:10:56,244 DEBUG [StoreOpener-36cfb92311c6e84a9b8aed595f98e6f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/info 2023-07-24 23:10:56,244 INFO [StoreOpener-36cfb92311c6e84a9b8aed595f98e6f3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36cfb92311c6e84a9b8aed595f98e6f3 columnFamilyName info 2023-07-24 23:10:56,244 INFO [StoreOpener-36cfb92311c6e84a9b8aed595f98e6f3-1] regionserver.HStore(310): Store=36cfb92311c6e84a9b8aed595f98e6f3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:56,247 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,247 DEBUG [StoreOpener-74fba4b5e736643fad81f5eef3c41f40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/m 2023-07-24 23:10:56,247 DEBUG [StoreOpener-74fba4b5e736643fad81f5eef3c41f40-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/m 2023-07-24 23:10:56,247 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,249 INFO [StoreOpener-74fba4b5e736643fad81f5eef3c41f40-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 74fba4b5e736643fad81f5eef3c41f40 columnFamilyName m 2023-07-24 23:10:56,249 INFO [StoreOpener-74fba4b5e736643fad81f5eef3c41f40-1] regionserver.HStore(310): Store=74fba4b5e736643fad81f5eef3c41f40/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:56,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,251 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:56,253 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:56,254 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:56,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 36cfb92311c6e84a9b8aed595f98e6f3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9604445600, jitterRate=-0.10551629960536957}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:56,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36cfb92311c6e84a9b8aed595f98e6f3: 2023-07-24 23:10:56,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3., pid=8, 
masterSystemTime=1690240256231 2023-07-24 23:10:56,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:56,260 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 74fba4b5e736643fad81f5eef3c41f40; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@30e8baf6, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:56,260 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 74fba4b5e736643fad81f5eef3c41f40: 2023-07-24 23:10:56,263 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40., pid=9, masterSystemTime=1690240256233 2023-07-24 23:10:56,265 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:56,265 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:56,265 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=36cfb92311c6e84a9b8aed595f98e6f3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,266 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690240256265"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240256265"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240256265"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240256265"}]},"ts":"1690240256265"} 2023-07-24 23:10:56,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:56,267 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 
2023-07-24 23:10:56,267 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=74fba4b5e736643fad81f5eef3c41f40, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,267 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690240256267"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240256267"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240256267"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240256267"}]},"ts":"1690240256267"} 2023-07-24 23:10:56,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 23:10:56,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 36cfb92311c6e84a9b8aed595f98e6f3, server=jenkins-hbase4.apache.org,44771,1690240255146 in 188 msec 2023-07-24 23:10:56,271 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 23:10:56,271 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 74fba4b5e736643fad81f5eef3c41f40, server=jenkins-hbase4.apache.org,39455,1690240254990 in 189 msec 2023-07-24 23:10:56,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 23:10:56,272 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=36cfb92311c6e84a9b8aed595f98e6f3, ASSIGN in 300 msec 2023-07-24 23:10:56,273 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 23:10:56,273 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:56,273 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=74fba4b5e736643fad81f5eef3c41f40, ASSIGN in 196 msec 2023-07-24 23:10:56,274 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240256274"}]},"ts":"1690240256274"} 2023-07-24 23:10:56,274 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:56,274 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240256274"}]},"ts":"1690240256274"} 2023-07-24 23:10:56,275 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 23:10:56,276 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 23:10:56,277 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:56,279 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:56,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 359 msec 2023-07-24 23:10:56,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 246 msec 2023-07-24 23:10:56,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 23:10:56,322 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:56,322 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:56,327 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 23:10:56,337 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:56,338 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:56,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-24 23:10:56,341 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54060, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:56,343 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 23:10:56,343 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
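
Once the RSGroupStartupWorker reports the rsgroup table online and GroupBasedLoadBalancer ready, group membership can be queried from a client connection, which is what the VerifyingRSGroupAdminClient and the 'list rsgroup' request further down do. A minimal sketch, assuming the hbase-rsgroup client API from branch-2.x (constructor and method signatures from memory; the main class is illustrative):

```java
// Illustrative sketch only: listing rsgroups the same way the "list rsgroup" RPC below does.
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRSGroupsSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " servers=" + group.getServers());
      }
    }
  }
}
```
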
2023-07-24 23:10:56,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 23:10:56,353 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:56,353 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:56,356 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:56,359 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 23:10:56,360 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:56,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-24 23:10:56,375 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 23:10:56,377 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 23:10:56,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.085sec 2023-07-24 23:10:56,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 23:10:56,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 23:10:56,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 23:10:56,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42745,1690240254672-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 23:10:56,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42745,1690240254672-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 23:10:56,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 23:10:56,393 DEBUG [Listener at localhost/33659] zookeeper.ReadOnlyZKClient(139): Connect 0x5f686b78 to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:56,399 DEBUG [Listener at localhost/33659] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@688fd12e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:56,402 DEBUG [hconnection-0x6920aec5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:56,404 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53958, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:56,406 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:56,406 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:56,409 DEBUG [Listener at localhost/33659] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 23:10:56,411 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 23:10:56,415 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 23:10:56,415 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:56,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 23:10:56,417 DEBUG [Listener at localhost/33659] zookeeper.ReadOnlyZKClient(139): Connect 0x58405c1f to 127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:56,422 DEBUG [Listener at localhost/33659] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@347a8754, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:56,423 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:56,429 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:56,430 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101999a05ba000a connected 2023-07-24 
23:10:56,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:56,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:56,442 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 23:10:56,460 INFO [Listener at localhost/33659] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 23:10:56,460 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:56,460 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:56,460 INFO [Listener at localhost/33659] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 23:10:56,461 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 23:10:56,461 INFO [Listener at localhost/33659] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 23:10:56,461 INFO [Listener at localhost/33659] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 23:10:56,462 INFO [Listener at localhost/33659] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42651 2023-07-24 23:10:56,462 INFO [Listener at localhost/33659] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 23:10:56,463 DEBUG [Listener at localhost/33659] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 23:10:56,464 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:56,465 INFO [Listener at localhost/33659] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 23:10:56,466 INFO [Listener at localhost/33659] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42651 connecting to ZooKeeper ensemble=127.0.0.1:61494 2023-07-24 23:10:56,475 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:426510x0, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 23:10:56,477 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42651-0x101999a05ba000b connected 2023-07-24 23:10:56,477 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, 
baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 23:10:56,478 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 23:10:56,478 DEBUG [Listener at localhost/33659] zookeeper.ZKUtil(164): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 23:10:56,479 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42651 2023-07-24 23:10:56,479 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42651 2023-07-24 23:10:56,480 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42651 2023-07-24 23:10:56,482 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42651 2023-07-24 23:10:56,483 DEBUG [Listener at localhost/33659] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42651 2023-07-24 23:10:56,484 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 23:10:56,485 INFO [Listener at localhost/33659] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 23:10:56,486 INFO [Listener at localhost/33659] http.HttpServer(1146): Jetty bound to port 45187 2023-07-24 23:10:56,486 INFO [Listener at localhost/33659] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 23:10:56,489 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:56,489 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b465921{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,AVAILABLE} 2023-07-24 23:10:56,489 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:56,490 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39e57d0b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 23:10:56,605 INFO [Listener at localhost/33659] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 23:10:56,605 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 23:10:56,606 INFO [Listener at localhost/33659] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 23:10:56,606 INFO [Listener at localhost/33659] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 23:10:56,607 INFO [Listener at localhost/33659] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 23:10:56,608 INFO [Listener at localhost/33659] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7cbca702{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/java.io.tmpdir/jetty-0_0_0_0-45187-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8338784444850102743/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:56,609 INFO [Listener at localhost/33659] server.AbstractConnector(333): Started ServerConnector@a0cb217{HTTP/1.1, (http/1.1)}{0.0.0.0:45187} 2023-07-24 23:10:56,610 INFO [Listener at localhost/33659] server.Server(415): Started @43913ms 2023-07-24 23:10:56,612 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(951): ClusterId : 002b9645-9242-4f1f-8283-3b07dad32c94 2023-07-24 23:10:56,612 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 23:10:56,614 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 23:10:56,614 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 23:10:56,616 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 23:10:56,617 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ReadOnlyZKClient(139): Connect 0x0c8a098c to 
127.0.0.1:61494 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 23:10:56,622 DEBUG [RS:3;jenkins-hbase4:42651] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24df66d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 23:10:56,622 DEBUG [RS:3;jenkins-hbase4:42651] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67262fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:56,631 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42651 2023-07-24 23:10:56,631 INFO [RS:3;jenkins-hbase4:42651] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 23:10:56,631 INFO [RS:3;jenkins-hbase4:42651] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 23:10:56,631 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 23:10:56,631 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42745,1690240254672 with isa=jenkins-hbase4.apache.org/172.31.14.131:42651, startcode=1690240256459 2023-07-24 23:10:56,631 DEBUG [RS:3;jenkins-hbase4:42651] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 23:10:56,634 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35093, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 23:10:56,634 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42745] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,634 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 23:10:56,635 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f 2023-07-24 23:10:56,635 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34031 2023-07-24 23:10:56,635 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43107 2023-07-24 23:10:56,640 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:56,640 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:56,640 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:56,640 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:56,640 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:56,641 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,641 WARN [RS:3;jenkins-hbase4:42651] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 23:10:56,641 INFO [RS:3;jenkins-hbase4:42651] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 23:10:56,641 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 23:10:56,641 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42651,1690240256459] 2023-07-24 23:10:56,641 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,644 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 23:10:56,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:56,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:56,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:56,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,648 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,649 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:56,649 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:56,649 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:56,650 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ZKUtil(162): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,650 DEBUG [RS:3;jenkins-hbase4:42651] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 23:10:56,651 INFO [RS:3;jenkins-hbase4:42651] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 23:10:56,652 INFO [RS:3;jenkins-hbase4:42651] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 23:10:56,652 INFO [RS:3;jenkins-hbase4:42651] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 23:10:56,652 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:56,652 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 23:10:56,654 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:56,654 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,654 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,654 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,655 DEBUG [RS:3;jenkins-hbase4:42651] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 23:10:56,656 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:56,656 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:56,656 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 23:10:56,668 INFO [RS:3;jenkins-hbase4:42651] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 23:10:56,668 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42651,1690240256459-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 23:10:56,679 INFO [RS:3;jenkins-hbase4:42651] regionserver.Replication(203): jenkins-hbase4.apache.org,42651,1690240256459 started 2023-07-24 23:10:56,679 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42651,1690240256459, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42651, sessionid=0x101999a05ba000b 2023-07-24 23:10:56,679 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 23:10:56,679 DEBUG [RS:3;jenkins-hbase4:42651] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,679 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42651,1690240256459' 2023-07-24 23:10:56,679 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 23:10:56,679 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 23:10:56,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42651,1690240256459' 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 23:10:56,680 DEBUG [RS:3;jenkins-hbase4:42651] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 23:10:56,680 INFO [RS:3;jenkins-hbase4:42651] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 23:10:56,681 INFO [RS:3;jenkins-hbase4:42651] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 23:10:56,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:56,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:56,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:56,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:56,686 DEBUG [hconnection-0x1d2ab8cd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:56,688 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53968, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:56,692 DEBUG [hconnection-0x1d2ab8cd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 23:10:56,694 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 23:10:56,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:56,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:56,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:56,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:56,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36642 deadline: 1690241456698, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:56,699 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:56,700 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:56,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:56,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:56,701 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:56,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:56,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:56,757 INFO [Listener at localhost/33659] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=562 (was 514) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5118434f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 33659 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x1d2ab8cd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2fff86e0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36475 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1911701812-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1311363767-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:36591 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0xcc39f4c-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39455Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1168938369@qtp-1940419669-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40273 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/33659.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6920aec5-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-716694667_17 at /127.0.0.1:34776 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 324936478@qtp-92584973-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33221 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) 
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@71e0914d java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp135279555-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:34031 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 45209 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x1162d826-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 45209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x58405c1f-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 34031 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp752824840-2572 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:47646 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp135279555-2234 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: IPC Server handler 2 on default port 34031 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@124c5273 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5a33b819-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp121429412-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 45209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42651Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2203 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f-prefix:jenkins-hbase4.apache.org,34441,1690240254844 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 34031 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp905277375-2206 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:36591 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240255446 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 45209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@532705e5[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp121429412-2309-acceptor-0@2ff7d9ec-ServerConnector@9fb3333{HTTP/1.1, (http/1.1)}{0.0.0.0:37817} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0c8a098c-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35803,1690240249288 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data6/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:36591 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@abdc5a1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-716694667_17 at /127.0.0.1:34750 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44771Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981011103_17 at /127.0.0.1:34814 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:51470 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x7be39a23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 731707017@qtp-1440743879-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS:1;jenkins-hbase4:39455 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56120@0x0fb15e46-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 33659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 36475 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) 
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_64227592_17 at /127.0.0.1:47632 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x1162d826 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34441Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1311363767-2264 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp121429412-2305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-582f038c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp135279555-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 45209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:36591 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp135279555-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:34832 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_64227592_17 at /127.0.0.1:51462 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2204-acceptor-0@37591d88-ServerConnector@18478842{HTTP/1.1, (http/1.1)}{0.0.0.0:43107} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data3/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@47973bc4 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp752824840-2575 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1311363767-2265-acceptor-0@267fc865-ServerConnector@81fc2e1{HTTP/1.1, (http/1.1)}{0.0.0.0:40261} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp752824840-2574 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:36591 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42745,1690240254672 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:34441-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0fc2a32e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_64227592_17 at /127.0.0.1:51452 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:61494): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: Listener at localhost/36721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: M:0;jenkins-hbase4:42745 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp905277375-2205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1311363767-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0c8a098c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5f686b78-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 34031 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0fc2a32e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-716694667_17 at /127.0.0.1:51402 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp752824840-2577 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 847254375@qtp-2144072995-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp752824840-2573-acceptor-0@642b5c45-ServerConnector@a0cb217{HTTP/1.1, (http/1.1)}{0.0.0.0:45187} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1665248954@qtp-2144072995-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41879 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1911701812-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:51436 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4f65586a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data2/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@53a9293c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 33659 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:36591 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp752824840-2578 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:36591 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f-prefix:jenkins-hbase4.apache.org,39455,1690240254990 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x7be39a23-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 33659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x58405c1f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.7@localhost:34031 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 36475 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x1162d826-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6535fcdb[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:36591 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1911701812-2295-acceptor-0@1b6b8af6-ServerConnector@3095489{HTTP/1.1, (http/1.1)}{0.0.0.0:37333} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1624dfad java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0c8a098c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-716694667_17 at /127.0.0.1:47602 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1311363767-2267 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData-prefix:jenkins-hbase4.apache.org,42745,1690240254672 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp135279555-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56120@0x0fb15e46 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp752824840-2576 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:2;jenkins-hbase4:44771 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp121429412-2306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240255446 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp121429412-2308 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp121429412-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp121429412-2307 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1666930596.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1311363767-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1911701812-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
java.util.concurrent.ThreadPoolExecutor$Worker@15c598ed[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7678f4bd-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36475 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@233df80a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp135279555-2235-acceptor-0@59f63ede-ServerConnector@473a089d{HTTP/1.1, (http/1.1)}{0.0.0.0:35873} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-30f8f21e-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x58405c1f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981011103_17 at /127.0.0.1:47624 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
264680736@qtp-1940419669-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1311363767-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1911701812-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1911701812-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42651-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f-prefix:jenkins-hbase4.apache.org,44771,1690240255146 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:36591 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 34031 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36721-SendThread(127.0.0.1:56120) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: qtp752824840-2579 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42651 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@3d85fcf4 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:61494 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5f686b78-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-29ad437f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 260928658@qtp-92584973-0 
java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (360695303) connection to localhost/127.0.0.1:34031 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 34031 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xcc39f4c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5f686b78 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp121429412-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data4/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1911701812-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_64227592_17 at /127.0.0.1:34822 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36475 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 36475 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(150887328) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1981011103_17 at /127.0.0.1:51448 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1911701812-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34441 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@43da5466 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Listener at localhost/33659.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data1/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:47620 [Receiving block 
BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5a33b819-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42745 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-10620651_17 at /127.0.0.1:34806 [Receiving block BP-106711482-172.31.14.131-1690240253959:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f-prefix:jenkins-hbase4.apache.org,44771,1690240255146.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33659.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: 1783983577@qtp-1440743879-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45613 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS:1;jenkins-hbase4:39455-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-716694667_17 at /127.0.0.1:47562 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33659 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42651 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:34031 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x7be39a23-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x0fc2a32e-SendThread(127.0.0.1:61494) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp135279555-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1a086f0f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56120@0x0fb15e46-SendThread(127.0.0.1:56120) java.lang.Thread.sleep(Native Method) 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Listener at localhost/33659.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:2;jenkins-hbase4:44771-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@e44fbd2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-106711482-172.31.14.131-1690240253959:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp905277375-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:34441 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d2ab8cd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39455 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 45209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0xcc39f4c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42745 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1311363767-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61494@0x5a33b819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/36386179.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34031 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp135279555-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data5/current/BP-106711482-172.31.14.131-1690240253959 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=837 (was 792) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=305 (was 342), ProcessCount=175 (was 177), AvailableMemoryMB=7599 (was 5744) - AvailableMemoryMB LEAK? -
2023-07-24 23:10:56,760 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=562 is superior to 500
2023-07-24 23:10:56,777 INFO [Listener at localhost/33659] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=562, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=305, ProcessCount=175, AvailableMemoryMB=7598
2023-07-24 23:10:56,777 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=562 is superior to 500
2023-07-24 23:10:56,777 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable
2023-07-24 23:10:56,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 23:10:56,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 23:10:56,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-24 23:10:56,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-24 23:10:56,782 INFO [RS:3;jenkins-hbase4:42651] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42651%2C1690240256459, suffix=, logDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,42651,1690240256459, archiveDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs, maxLogs=32
2023-07-24 23:10:56,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-24 23:10:56,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-24 23:10:56,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-24 23:10:56,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-24 23:10:56,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-24 23:10:56,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-24 23:10:56,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-24 23:10:56,792 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-24 23:10:56,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-24 23:10:56,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-24 23:10:56,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-24 23:10:56,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-24 23:10:56,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-24 23:10:56,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 23:10:56,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 23:10:56,806 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK]
2023-07-24 23:10:56,806 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK]
2023-07-24 23:10:56,807 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK]
2023-07-24 23:10:56,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master
2023-07-24 23:10:56,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-24 23:10:56,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36642 deadline: 1690241456809, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist.
2023-07-24 23:10:56,810 INFO [RS:3;jenkins-hbase4:42651] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/WALs/jenkins-hbase4.apache.org,42651,1690240256459/jenkins-hbase4.apache.org%2C42651%2C1690240256459.1690240256783
2023-07-24 23:10:56,810 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:56,811 DEBUG [RS:3;jenkins-hbase4:42651] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36005,DS-16b8c1ad-1fee-4267-a812-e5241a0ea2c8,DISK], DatanodeInfoWithStorage[127.0.0.1:39601,DS-f237c4b5-7826-4e6a-ab1f-b40f6e113a72,DISK], DatanodeInfoWithStorage[127.0.0.1:36877,DS-89860324-f117-4671-8197-befa8c84a26c,DISK]] 2023-07-24 23:10:56,811 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:56,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:56,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:56,812 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:56,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:56,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:56,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:56,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 23:10:56,816 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:56,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-24 23:10:56,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 23:10:56,818 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:56,818 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:56,818 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:56,820 INFO [PEWorker-3] 
procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 23:10:56,821 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:56,822 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 empty. 2023-07-24 23:10:56,822 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:56,822 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 23:10:56,834 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-24 23:10:56,836 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1a7d83851dbfb8b95fe7dafd46d90ed8, NAME => 't1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp 2023-07-24 23:10:56,844 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:56,844 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 1a7d83851dbfb8b95fe7dafd46d90ed8, disabling compactions & flushes 2023-07-24 23:10:56,844 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:56,845 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:56,845 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. after waiting 0 ms 2023-07-24 23:10:56,845 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:56,845 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 
2023-07-24 23:10:56,845 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 1a7d83851dbfb8b95fe7dafd46d90ed8: 2023-07-24 23:10:56,853 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 23:10:56,855 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240256855"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240256855"}]},"ts":"1690240256855"} 2023-07-24 23:10:56,859 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 23:10:56,860 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 23:10:56,861 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240256860"}]},"ts":"1690240256860"} 2023-07-24 23:10:56,862 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-24 23:10:56,872 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 23:10:56,872 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 23:10:56,872 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 23:10:56,872 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 23:10:56,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 23:10:56,873 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 23:10:56,873 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, ASSIGN}] 2023-07-24 23:10:56,874 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, ASSIGN 2023-07-24 23:10:56,875 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34441,1690240254844; forceNewPlan=false, retain=false 2023-07-24 23:10:56,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 23:10:57,026 INFO [jenkins-hbase4:42745] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 23:10:57,027 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1a7d83851dbfb8b95fe7dafd46d90ed8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:57,027 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240257027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240257027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240257027"}]},"ts":"1690240257027"} 2023-07-24 23:10:57,029 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 1a7d83851dbfb8b95fe7dafd46d90ed8, server=jenkins-hbase4.apache.org,34441,1690240254844}] 2023-07-24 23:10:57,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 23:10:57,182 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:57,182 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 23:10:57,184 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54050, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 23:10:57,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:57,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a7d83851dbfb8b95fe7dafd46d90ed8, NAME => 't1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.', STARTKEY => '', ENDKEY => ''} 2023-07-24 23:10:57,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 23:10:57,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,190 INFO [StoreOpener-1a7d83851dbfb8b95fe7dafd46d90ed8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,191 DEBUG [StoreOpener-1a7d83851dbfb8b95fe7dafd46d90ed8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/cf1 2023-07-24 23:10:57,191 DEBUG [StoreOpener-1a7d83851dbfb8b95fe7dafd46d90ed8-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/cf1 2023-07-24 23:10:57,191 INFO [StoreOpener-1a7d83851dbfb8b95fe7dafd46d90ed8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a7d83851dbfb8b95fe7dafd46d90ed8 columnFamilyName cf1 2023-07-24 23:10:57,192 INFO [StoreOpener-1a7d83851dbfb8b95fe7dafd46d90ed8-1] regionserver.HStore(310): Store=1a7d83851dbfb8b95fe7dafd46d90ed8/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 23:10:57,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 23:10:57,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1a7d83851dbfb8b95fe7dafd46d90ed8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9827296160, jitterRate=-0.08476172387599945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 23:10:57,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1a7d83851dbfb8b95fe7dafd46d90ed8: 2023-07-24 23:10:57,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8., pid=14, masterSystemTime=1690240257182 2023-07-24 23:10:57,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:57,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 
2023-07-24 23:10:57,204 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1a7d83851dbfb8b95fe7dafd46d90ed8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:57,205 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240257204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690240257204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690240257204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690240257204"}]},"ts":"1690240257204"} 2023-07-24 23:10:57,208 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-24 23:10:57,208 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 1a7d83851dbfb8b95fe7dafd46d90ed8, server=jenkins-hbase4.apache.org,34441,1690240254844 in 177 msec 2023-07-24 23:10:57,209 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 23:10:57,210 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, ASSIGN in 335 msec 2023-07-24 23:10:57,211 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 23:10:57,211 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240257211"}]},"ts":"1690240257211"} 2023-07-24 23:10:57,212 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-24 23:10:57,215 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 23:10:57,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 401 msec 2023-07-24 23:10:57,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 23:10:57,421 INFO [Listener at localhost/33659] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-24 23:10:57,421 DEBUG [Listener at localhost/33659] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-24 23:10:57,421 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:57,423 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-24 23:10:57,424 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:57,424 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-24 23:10:57,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 23:10:57,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 23:10:57,428 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 23:10:57,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 23:10:57,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:36642 deadline: 1690240317425, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-24 23:10:57,430 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:57,432 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-24 23:10:57,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:57,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:57,532 INFO [Listener at localhost/33659] client.HBaseAdmin$15(890): Started disable of t1 2023-07-24 23:10:57,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-24 23:10:57,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-24 23:10:57,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:57,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240257536"}]},"ts":"1690240257536"} 2023-07-24 23:10:57,538 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-24 23:10:57,539 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-24 23:10:57,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, UNASSIGN}] 2023-07-24 23:10:57,540 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, UNASSIGN 2023-07-24 23:10:57,541 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1a7d83851dbfb8b95fe7dafd46d90ed8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:57,541 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240257541"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690240257541"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690240257541"}]},"ts":"1690240257541"} 2023-07-24 23:10:57,542 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 1a7d83851dbfb8b95fe7dafd46d90ed8, server=jenkins-hbase4.apache.org,34441,1690240254844}] 2023-07-24 23:10:57,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:57,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1a7d83851dbfb8b95fe7dafd46d90ed8, disabling compactions & flushes 2023-07-24 23:10:57,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:57,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:57,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. after waiting 0 ms 2023-07-24 23:10:57,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 
2023-07-24 23:10:57,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 23:10:57,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8. 2023-07-24 23:10:57,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1a7d83851dbfb8b95fe7dafd46d90ed8: 2023-07-24 23:10:57,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,701 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1a7d83851dbfb8b95fe7dafd46d90ed8, regionState=CLOSED 2023-07-24 23:10:57,701 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690240257701"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690240257701"}]},"ts":"1690240257701"} 2023-07-24 23:10:57,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 23:10:57,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 1a7d83851dbfb8b95fe7dafd46d90ed8, server=jenkins-hbase4.apache.org,34441,1690240254844 in 160 msec 2023-07-24 23:10:57,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 23:10:57,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=1a7d83851dbfb8b95fe7dafd46d90ed8, UNASSIGN in 166 msec 2023-07-24 23:10:57,708 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690240257708"}]},"ts":"1690240257708"} 2023-07-24 23:10:57,709 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-24 23:10:57,712 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-24 23:10:57,713 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-24 23:10:57,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 23:10:57,839 INFO [Listener at localhost/33659] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-24 23:10:57,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-24 23:10:57,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-24 23:10:57,842 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 23:10:57,842 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-24 23:10:57,842 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-24 23:10:57,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:57,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:57,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:57,846 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 23:10:57,847 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/cf1, FileablePath, hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/recovered.edits] 2023-07-24 23:10:57,853 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/recovered.edits/4.seqid to hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/archive/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8/recovered.edits/4.seqid 2023-07-24 23:10:57,853 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/.tmp/data/default/t1/1a7d83851dbfb8b95fe7dafd46d90ed8 2023-07-24 23:10:57,853 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 23:10:57,855 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-24 23:10:57,857 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-24 23:10:57,858 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-24 23:10:57,859 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-24 23:10:57,859 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-24 23:10:57,859 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690240257859"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:57,861 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 23:10:57,861 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1a7d83851dbfb8b95fe7dafd46d90ed8, NAME => 't1,,1690240256814.1a7d83851dbfb8b95fe7dafd46d90ed8.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 23:10:57,861 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-24 23:10:57,861 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690240257861"}]},"ts":"9223372036854775807"} 2023-07-24 23:10:57,862 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-24 23:10:57,864 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 23:10:57,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-24 23:10:57,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 23:10:57,948 INFO [Listener at localhost/33659] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-24 23:10:57,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:57,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:57,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:57,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:57,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:57,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:57,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:57,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:57,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:57,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:57,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:57,970 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:57,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:57,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:57,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:57,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:57,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:57,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:57,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:57,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:57,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:57,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36642 deadline: 1690241457979, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:57,979 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:57,984 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:57,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:57,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:57,985 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:57,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:57,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,008 INFO [Listener at localhost/33659] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=571 (was 562) - Thread LEAK? -, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=304 (was 305), ProcessCount=175 (was 175), AvailableMemoryMB=7622 (was 7598) - AvailableMemoryMB LEAK? 
- 2023-07-24 23:10:58,008 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 23:10:58,028 INFO [Listener at localhost/33659] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=304, ProcessCount=175, AvailableMemoryMB=7621 2023-07-24 23:10:58,028 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 23:10:58,028 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-24 23:10:58,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:58,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,043 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,047 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,052 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458052, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,053 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:58,055 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,056 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 23:10:58,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:58,058 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-24 23:10:58,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 23:10:58,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 23:10:58,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:58,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,076 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458096, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,097 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:58,099 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,113 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,133 INFO [Listener at localhost/33659] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=304 (was 304), ProcessCount=175 (was 175), AvailableMemoryMB=7621 (was 7621) 2023-07-24 23:10:58,133 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-24 23:10:58,157 INFO [Listener at localhost/33659] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=304, ProcessCount=175, AvailableMemoryMB=7621 2023-07-24 23:10:58,157 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-24 23:10:58,157 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-24 23:10:58,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:58,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,182 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,188 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458196, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,197 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:58,199 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,200 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 23:10:58,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,217 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458226, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,227 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:58,229 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,230 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,253 INFO [Listener at localhost/33659] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=304 (was 304), ProcessCount=175 (was 175), AvailableMemoryMB=7621 (was 7621) 2023-07-24 23:10:58,253 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-24 23:10:58,274 INFO [Listener at localhost/33659] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=304, ProcessCount=175, AvailableMemoryMB=7621 2023-07-24 23:10:58,275 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-24 23:10:58,275 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-24 23:10:58,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 23:10:58,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,293 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,297 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458304, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,305 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 23:10:58,307 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,308 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,309 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-24 23:10:58,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-24 23:10:58,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 23:10:58,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 23:10:58,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 23:10:58,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,327 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 23:10:58,331 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:58,333 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-24 23:10:58,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 23:10:58,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 23:10:58,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:36642 deadline: 1690241458429, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-24 23:10:58,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 23:10:58,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 23:10:58,456 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 23:10:58,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 18 msec 2023-07-24 23:10:58,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 23:10:58,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-24 23:10:58,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 23:10:58,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 23:10:58,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 23:10:58,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 23:10:58,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,571 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,573 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 23:10:58,574 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,575 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 23:10:58,576 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 23:10:58,576 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,578 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 23:10:58,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-24 23:10:58,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 23:10:58,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 23:10:58,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 23:10:58,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 23:10:58,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:36642 deadline: 1690240318684, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-24 23:10:58,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
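The entries above trace testNamespaceConstraint: namespace Group_foo is created with hbase.rsgroup.name => 'Group_foo', removing rsgroup Group_foo is rejected while the namespace still references it, and creating a namespace bound to the non-existent group "foo" is rejected in preCreateNamespace. A minimal sketch of that constraint follows; it assumes an already-open Connection `conn` and is illustrative only, not the test's literal code (RSGroupAdminClient is the same client the stack traces above show).

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceRSGroupConstraintSketch {
  // conn: an already-open cluster Connection (assumed, not taken from the log).
  static void exercise(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    try (Admin admin = conn.getAdmin()) {
      rsGroupAdmin.addRSGroup("Group_foo");
      // Bind the namespace to the group through the hbase.rsgroup.name property.
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());
      try {
        // Rejected while the namespace still references the group:
        // ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException expected) {
        // Drop the namespace first; the group can then be removed.
        admin.deleteNamespace("Group_foo");
        rsGroupAdmin.removeRSGroup("Group_foo");
      }
    }
  }
}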
2023-07-24 23:10:58,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-24 23:10:58,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 23:10:58,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 23:10:58,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
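The move-tables/move-servers/remove-group entries above, and the repeated "Got this on setup, FYI" ConstraintException whenever the master's address is moved into the "master" group, come from the test's per-method cleanup (TestRSGroupsBase.tearDownAfterMethod in the traces). A rough sketch of that cleanup loop, reusing the assumed rsGroupAdmin client from the previous sketch and simplified relative to the real test code:

import java.io.IOException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  static void restoreDefaults(RSGroupAdminClient rsGroupAdmin) throws IOException {
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // leave the default group alone
      }
      // Empty table/server sets are logged and ignored by the server, as seen above.
      rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}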
2023-07-24 23:10:58,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 23:10:58,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 23:10:58,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 23:10:58,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 23:10:58,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 23:10:58,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 23:10:58,701 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 23:10:58,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 23:10:58,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 23:10:58,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 23:10:58,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 23:10:58,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 23:10:58,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42745] to rsgroup master 2023-07-24 23:10:58,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 23:10:58,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36642 deadline: 1690241458710, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 2023-07-24 23:10:58,710 WARN [Listener at localhost/33659] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42745 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 23:10:58,712 INFO [Listener at localhost/33659] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 23:10:58,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 23:10:58,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 23:10:58,713 INFO [Listener at localhost/33659] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34441, jenkins-hbase4.apache.org:39455, jenkins-hbase4.apache.org:42651, jenkins-hbase4.apache.org:44771], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 23:10:58,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 23:10:58,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42745] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 23:10:58,732 INFO [Listener at localhost/33659] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 574), OpenFileDescriptor=837 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=304 (was 304), ProcessCount=175 (was 175), AvailableMemoryMB=7617 (was 7621) 2023-07-24 23:10:58,732 WARN [Listener at localhost/33659] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-24 23:10:58,733 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 23:10:58,733 INFO [Listener at localhost/33659] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 23:10:58,733 DEBUG [Listener at localhost/33659] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f686b78 to 127.0.0.1:61494 2023-07-24 23:10:58,733 DEBUG [Listener at localhost/33659] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,733 DEBUG [Listener at localhost/33659] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 
23:10:58,733 DEBUG [Listener at localhost/33659] util.JVMClusterUtil(257): Found active master hash=1959787904, stopped=false 2023-07-24 23:10:58,733 DEBUG [Listener at localhost/33659] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 23:10:58,733 DEBUG [Listener at localhost/33659] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 23:10:58,733 INFO [Listener at localhost/33659] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:58,740 INFO [Listener at localhost/33659] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 23:10:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:58,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:58,740 DEBUG [Listener at localhost/33659] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a33b819 to 127.0.0.1:61494 2023-07-24 23:10:58,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:58,741 DEBUG [Listener at localhost/33659] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,741 INFO [Listener at localhost/33659] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34441,1690240254844' ***** 2023-07-24 23:10:58,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:58,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 23:10:58,741 INFO [Listener at localhost/33659] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:58,741 INFO [Listener at localhost/33659] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39455,1690240254990' ***** 2023-07-24 23:10:58,741 INFO [Listener at localhost/33659] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:58,741 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:58,742 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:58,741 INFO [Listener at localhost/33659] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44771,1690240255146' ***** 2023-07-24 23:10:58,742 INFO [Listener at localhost/33659] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:58,743 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,744 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:58,744 INFO [Listener at localhost/33659] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42651,1690240256459' ***** 2023-07-24 23:10:58,746 INFO [Listener at localhost/33659] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 23:10:58,746 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:58,749 INFO [RS:1;jenkins-hbase4:39455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5ef7553{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:58,750 INFO [RS:2;jenkins-hbase4:44771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5ecbfcb6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:58,749 INFO [RS:0;jenkins-hbase4:34441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1d5b597{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:58,750 INFO [RS:3;jenkins-hbase4:42651] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7cbca702{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 23:10:58,750 INFO [RS:1;jenkins-hbase4:39455] server.AbstractConnector(383): Stopped ServerConnector@81fc2e1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:58,750 INFO [RS:1;jenkins-hbase4:39455] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:58,751 INFO [RS:0;jenkins-hbase4:34441] server.AbstractConnector(383): Stopped ServerConnector@473a089d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 
2023-07-24 23:10:58,751 INFO [RS:2;jenkins-hbase4:44771] server.AbstractConnector(383): Stopped ServerConnector@3095489{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:58,751 INFO [RS:1;jenkins-hbase4:39455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21da4d95{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:58,751 INFO [RS:2;jenkins-hbase4:44771] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:58,751 INFO [RS:0;jenkins-hbase4:34441] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:58,751 INFO [RS:3;jenkins-hbase4:42651] server.AbstractConnector(383): Stopped ServerConnector@a0cb217{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:58,752 INFO [RS:1;jenkins-hbase4:39455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39bb29dd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:58,753 INFO [RS:2;jenkins-hbase4:44771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6b2fe1d6{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:58,754 INFO [RS:0;jenkins-hbase4:34441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@17847e44{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:58,755 INFO [RS:2;jenkins-hbase4:44771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@23981551{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:58,756 INFO [RS:1;jenkins-hbase4:39455] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:58,754 INFO [RS:3;jenkins-hbase4:42651] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:58,756 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:58,756 INFO [RS:1;jenkins-hbase4:39455] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:58,756 INFO [RS:0;jenkins-hbase4:34441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@61ab34ff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:58,757 INFO [RS:3;jenkins-hbase4:42651] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39e57d0b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:58,757 INFO [RS:1;jenkins-hbase4:39455] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 23:10:58,758 INFO [RS:3;jenkins-hbase4:42651] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b465921{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:58,759 INFO [RS:2;jenkins-hbase4:44771] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:58,759 INFO [RS:0;jenkins-hbase4:34441] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:58,759 INFO [RS:2;jenkins-hbase4:44771] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:58,759 INFO [RS:3;jenkins-hbase4:42651] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 23:10:58,758 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(3305): Received CLOSE for 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:58,759 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:58,759 INFO [RS:3;jenkins-hbase4:42651] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:58,759 INFO [RS:3;jenkins-hbase4:42651] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:58,759 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:58,759 INFO [RS:2;jenkins-hbase4:44771] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:58,759 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:58,760 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(3305): Received CLOSE for 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:58,760 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,759 INFO [RS:0;jenkins-hbase4:34441] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 23:10:58,759 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 23:10:58,760 INFO [RS:0;jenkins-hbase4:34441] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 23:10:58,760 DEBUG [RS:3;jenkins-hbase4:42651] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0c8a098c to 127.0.0.1:61494 2023-07-24 23:10:58,760 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:58,760 DEBUG [RS:3;jenkins-hbase4:42651] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,760 DEBUG [RS:0;jenkins-hbase4:34441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0fc2a32e to 127.0.0.1:61494 2023-07-24 23:10:58,760 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42651,1690240256459; all regions closed. 2023-07-24 23:10:58,760 DEBUG [RS:0;jenkins-hbase4:34441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,761 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34441,1690240254844; all regions closed. 
2023-07-24 23:10:58,761 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:58,761 DEBUG [RS:2;jenkins-hbase4:44771] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1162d826 to 127.0.0.1:61494 2023-07-24 23:10:58,760 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:58,761 DEBUG [RS:2;jenkins-hbase4:44771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36cfb92311c6e84a9b8aed595f98e6f3, disabling compactions & flushes 2023-07-24 23:10:58,761 DEBUG [RS:1;jenkins-hbase4:39455] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7be39a23 to 127.0.0.1:61494 2023-07-24 23:10:58,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 74fba4b5e736643fad81f5eef3c41f40, disabling compactions & flushes 2023-07-24 23:10:58,761 DEBUG [RS:1;jenkins-hbase4:39455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:58,761 INFO [RS:2;jenkins-hbase4:44771] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:58,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:58,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. after waiting 0 ms 2023-07-24 23:10:58,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:58,761 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 23:10:58,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 36cfb92311c6e84a9b8aed595f98e6f3 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-24 23:10:58,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:58,762 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1478): Online Regions={74fba4b5e736643fad81f5eef3c41f40=hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40.} 2023-07-24 23:10:58,762 INFO [RS:2;jenkins-hbase4:44771] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:58,762 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1504): Waiting on 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:58,762 INFO [RS:2;jenkins-hbase4:44771] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:58,762 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 23:10:58,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:58,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. after waiting 0 ms 2023-07-24 23:10:58,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:58,762 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 23:10:58,763 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 36cfb92311c6e84a9b8aed595f98e6f3=hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3.} 2023-07-24 23:10:58,763 DEBUG [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1504): Waiting on 1588230740, 36cfb92311c6e84a9b8aed595f98e6f3 2023-07-24 23:10:58,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 74fba4b5e736643fad81f5eef3c41f40 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-24 23:10:58,763 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 23:10:58,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 23:10:58,763 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 23:10:58,763 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 23:10:58,763 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 23:10:58,763 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-24 23:10:58,774 DEBUG [RS:3;jenkins-hbase4:42651] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs 2023-07-24 23:10:58,774 INFO [RS:3;jenkins-hbase4:42651] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42651%2C1690240256459:(num 1690240256783) 2023-07-24 23:10:58,774 DEBUG [RS:3;jenkins-hbase4:42651] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,774 INFO [RS:3;jenkins-hbase4:42651] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,775 INFO [RS:3;jenkins-hbase4:42651] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:58,775 INFO [RS:3;jenkins-hbase4:42651] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-24 23:10:58,775 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:58,775 INFO [RS:3;jenkins-hbase4:42651] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:58,775 INFO [RS:3;jenkins-hbase4:42651] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 23:10:58,777 INFO [RS:3;jenkins-hbase4:42651] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42651 2023-07-24 23:10:58,782 DEBUG [RS:0;jenkins-hbase4:34441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs 2023-07-24 23:10:58,782 INFO [RS:0;jenkins-hbase4:34441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34441%2C1690240254844:(num 1690240255773) 2023-07-24 23:10:58,782 DEBUG [RS:0;jenkins-hbase4:34441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,783 INFO [RS:0;jenkins-hbase4:34441] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,788 INFO [RS:0;jenkins-hbase4:34441] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:58,788 INFO [RS:0;jenkins-hbase4:34441] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:58,788 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:58,788 INFO [RS:0;jenkins-hbase4:34441] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 23:10:58,788 INFO [RS:0;jenkins-hbase4:34441] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 23:10:58,789 INFO [RS:0;jenkins-hbase4:34441] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34441 2023-07-24 23:10:58,799 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:58,799 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 23:10:58,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/.tmp/info/4839f8885d254a8db36e56d563896e6f 2023-07-24 23:10:58,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/info/c06592349bd349669623cd3b15c18932 2023-07-24 23:10:58,823 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4839f8885d254a8db36e56d563896e6f 2023-07-24 23:10:58,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/.tmp/info/4839f8885d254a8db36e56d563896e6f as hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/info/4839f8885d254a8db36e56d563896e6f 2023-07-24 23:10:58,827 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c06592349bd349669623cd3b15c18932 2023-07-24 23:10:58,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4839f8885d254a8db36e56d563896e6f 2023-07-24 23:10:58,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/info/4839f8885d254a8db36e56d563896e6f, entries=3, sequenceid=9, filesize=5.0 K 2023-07-24 23:10:58,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 36cfb92311c6e84a9b8aed595f98e6f3 in 76ms, sequenceid=9, compaction requested=false 2023-07-24 23:10:58,841 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/namespace/36cfb92311c6e84a9b8aed595f98e6f3/recovered.edits/12.seqid, newMaxSeqId=12, 
maxSeqId=1 2023-07-24 23:10:58,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:58,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36cfb92311c6e84a9b8aed595f98e6f3: 2023-07-24 23:10:58,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690240255918.36cfb92311c6e84a9b8aed595f98e6f3. 2023-07-24 23:10:58,863 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/rep_barrier/eb97d30a3f2944cc938068603c8e7689 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] 
zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34441,1690240254844 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,865 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:58,866 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42651,1690240256459 2023-07-24 23:10:58,866 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34441,1690240254844] 2023-07-24 23:10:58,866 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34441,1690240254844; numProcessing=1 2023-07-24 23:10:58,869 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34441,1690240254844 already deleted, retry=false 2023-07-24 23:10:58,869 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34441,1690240254844 expired; onlineServers=3 2023-07-24 23:10:58,869 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42651,1690240256459] 2023-07-24 23:10:58,869 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42651,1690240256459; numProcessing=2 2023-07-24 23:10:58,869 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb97d30a3f2944cc938068603c8e7689 2023-07-24 23:10:58,870 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42651,1690240256459 already deleted, retry=false 2023-07-24 23:10:58,870 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42651,1690240256459 expired; onlineServers=2 2023-07-24 23:10:58,884 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/table/0641acb59ea2416a9fd45b947fc35990 2023-07-24 23:10:58,890 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0641acb59ea2416a9fd45b947fc35990 2023-07-24 23:10:58,891 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/info/c06592349bd349669623cd3b15c18932 as 
hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/info/c06592349bd349669623cd3b15c18932 2023-07-24 23:10:58,897 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c06592349bd349669623cd3b15c18932 2023-07-24 23:10:58,897 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/info/c06592349bd349669623cd3b15c18932, entries=22, sequenceid=26, filesize=7.3 K 2023-07-24 23:10:58,898 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/rep_barrier/eb97d30a3f2944cc938068603c8e7689 as hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/rep_barrier/eb97d30a3f2944cc938068603c8e7689 2023-07-24 23:10:58,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb97d30a3f2944cc938068603c8e7689 2023-07-24 23:10:58,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/rep_barrier/eb97d30a3f2944cc938068603c8e7689, entries=1, sequenceid=26, filesize=4.9 K 2023-07-24 23:10:58,904 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/.tmp/table/0641acb59ea2416a9fd45b947fc35990 as hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/table/0641acb59ea2416a9fd45b947fc35990 2023-07-24 23:10:58,909 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0641acb59ea2416a9fd45b947fc35990 2023-07-24 23:10:58,909 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/table/0641acb59ea2416a9fd45b947fc35990, entries=6, sequenceid=26, filesize=5.1 K 2023-07-24 23:10:58,910 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 147ms, sequenceid=26, compaction requested=false 2023-07-24 23:10:58,919 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 23:10:58,920 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:58,920 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:58,920 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 
2023-07-24 23:10:58,920 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 23:10:58,962 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1504): Waiting on 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:58,963 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44771,1690240255146; all regions closed. 2023-07-24 23:10:58,968 DEBUG [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs 2023-07-24 23:10:58,968 INFO [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44771%2C1690240255146.meta:.meta(num 1690240255866) 2023-07-24 23:10:58,974 DEBUG [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs 2023-07-24 23:10:58,974 INFO [RS:2;jenkins-hbase4:44771] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44771%2C1690240255146:(num 1690240255775) 2023-07-24 23:10:58,974 DEBUG [RS:2;jenkins-hbase4:44771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:58,974 INFO [RS:2;jenkins-hbase4:44771] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:58,974 INFO [RS:2;jenkins-hbase4:44771] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:58,974 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 23:10:58,975 INFO [RS:2;jenkins-hbase4:44771] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44771 2023-07-24 23:10:58,976 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:58,977 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:58,977 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44771,1690240255146 2023-07-24 23:10:58,978 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44771,1690240255146] 2023-07-24 23:10:58,978 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44771,1690240255146; numProcessing=3 2023-07-24 23:10:58,979 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44771,1690240255146 already deleted, retry=false 2023-07-24 23:10:58,979 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44771,1690240255146 expired; onlineServers=1 2023-07-24 23:10:59,162 DEBUG [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1504): Waiting on 74fba4b5e736643fad81f5eef3c41f40 2023-07-24 23:10:59,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/.tmp/m/49d56b4d9f1545509feaf3ea9906e5f9 2023-07-24 23:10:59,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 49d56b4d9f1545509feaf3ea9906e5f9 2023-07-24 23:10:59,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/.tmp/m/49d56b4d9f1545509feaf3ea9906e5f9 as hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/m/49d56b4d9f1545509feaf3ea9906e5f9 2023-07-24 23:10:59,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 49d56b4d9f1545509feaf3ea9906e5f9 2023-07-24 23:10:59,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/m/49d56b4d9f1545509feaf3ea9906e5f9, entries=12, sequenceid=29, filesize=5.4 K 2023-07-24 23:10:59,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 
KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 74fba4b5e736643fad81f5eef3c41f40 in 463ms, sequenceid=29, compaction requested=false 2023-07-24 23:10:59,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/data/hbase/rsgroup/74fba4b5e736643fad81f5eef3c41f40/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-24 23:10:59,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 23:10:59,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:59,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 74fba4b5e736643fad81f5eef3c41f40: 2023-07-24 23:10:59,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690240256033.74fba4b5e736643fad81f5eef3c41f40. 2023-07-24 23:10:59,340 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,340 INFO [RS:2;jenkins-hbase4:44771] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44771,1690240255146; zookeeper connection closed. 2023-07-24 23:10:59,340 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:44771-0x101999a05ba0003, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,341 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3e5828e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3e5828e 2023-07-24 23:10:59,363 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39455,1690240254990; all regions closed. 2023-07-24 23:10:59,369 DEBUG [RS:1;jenkins-hbase4:39455] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/oldWALs 2023-07-24 23:10:59,369 INFO [RS:1;jenkins-hbase4:39455] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39455%2C1690240254990:(num 1690240255779) 2023-07-24 23:10:59,369 DEBUG [RS:1;jenkins-hbase4:39455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:59,369 INFO [RS:1;jenkins-hbase4:39455] regionserver.LeaseManager(133): Closed leases 2023-07-24 23:10:59,369 INFO [RS:1;jenkins-hbase4:39455] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 23:10:59,369 INFO [RS:1;jenkins-hbase4:39455] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 23:10:59,369 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:59,369 INFO [RS:1;jenkins-hbase4:39455] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-24 23:10:59,370 INFO [RS:1;jenkins-hbase4:39455] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 23:10:59,371 INFO [RS:1;jenkins-hbase4:39455] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39455 2023-07-24 23:10:59,373 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 23:10:59,373 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39455,1690240254990 2023-07-24 23:10:59,374 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39455,1690240254990] 2023-07-24 23:10:59,374 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39455,1690240254990; numProcessing=4 2023-07-24 23:10:59,375 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39455,1690240254990 already deleted, retry=false 2023-07-24 23:10:59,375 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39455,1690240254990 expired; onlineServers=0 2023-07-24 23:10:59,375 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42745,1690240254672' ***** 2023-07-24 23:10:59,375 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 23:10:59,376 DEBUG [M:0;jenkins-hbase4:42745] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70f077c1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 23:10:59,376 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 23:10:59,379 INFO [M:0;jenkins-hbase4:42745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@f50000{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 23:10:59,379 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 23:10:59,379 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 23:10:59,379 INFO [M:0;jenkins-hbase4:42745] server.AbstractConnector(383): Stopped ServerConnector@18478842{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:59,379 INFO [M:0;jenkins-hbase4:42745] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 23:10:59,379 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Set watcher on 
znode that does not yet exist, /hbase/master 2023-07-24 23:10:59,380 INFO [M:0;jenkins-hbase4:42745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7ddd5322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 23:10:59,380 INFO [M:0;jenkins-hbase4:42745] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c61d2b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/hadoop.log.dir/,STOPPED} 2023-07-24 23:10:59,380 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42745,1690240254672 2023-07-24 23:10:59,381 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42745,1690240254672; all regions closed. 2023-07-24 23:10:59,381 DEBUG [M:0;jenkins-hbase4:42745] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 23:10:59,381 INFO [M:0;jenkins-hbase4:42745] master.HMaster(1491): Stopping master jetty server 2023-07-24 23:10:59,381 INFO [M:0;jenkins-hbase4:42745] server.AbstractConnector(383): Stopped ServerConnector@9fb3333{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 23:10:59,381 DEBUG [M:0;jenkins-hbase4:42745] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 23:10:59,382 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 23:10:59,382 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240255446] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690240255446,5,FailOnTimeoutGroup] 2023-07-24 23:10:59,382 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240255446] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690240255446,5,FailOnTimeoutGroup] 2023-07-24 23:10:59,382 DEBUG [M:0;jenkins-hbase4:42745] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 23:10:59,382 INFO [M:0;jenkins-hbase4:42745] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 23:10:59,382 INFO [M:0;jenkins-hbase4:42745] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 23:10:59,382 INFO [M:0;jenkins-hbase4:42745] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 23:10:59,382 DEBUG [M:0;jenkins-hbase4:42745] master.HMaster(1512): Stopping service threads 2023-07-24 23:10:59,382 INFO [M:0;jenkins-hbase4:42745] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 23:10:59,382 ERROR [M:0;jenkins-hbase4:42745] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 23:10:59,382 INFO [M:0;jenkins-hbase4:42745] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 23:10:59,382 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 23:10:59,383 DEBUG [M:0;jenkins-hbase4:42745] zookeeper.ZKUtil(398): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 23:10:59,383 WARN [M:0;jenkins-hbase4:42745] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 23:10:59,383 INFO [M:0;jenkins-hbase4:42745] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 23:10:59,383 INFO [M:0;jenkins-hbase4:42745] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 23:10:59,383 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 23:10:59,383 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:59,383 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:59,383 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 23:10:59,383 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 23:10:59,383 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.23 KB heapSize=90.66 KB 2023-07-24 23:10:59,393 INFO [M:0;jenkins-hbase4:42745] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.23 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4bdff810727440a78d5f54da403ed875 2023-07-24 23:10:59,399 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4bdff810727440a78d5f54da403ed875 as hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4bdff810727440a78d5f54da403ed875 2023-07-24 23:10:59,404 INFO [M:0;jenkins-hbase4:42745] regionserver.HStore(1080): Added hdfs://localhost:34031/user/jenkins/test-data/8bfda62e-4ca5-6299-644a-82c4db9fef9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4bdff810727440a78d5f54da403ed875, entries=22, sequenceid=175, filesize=11.1 K 2023-07-24 23:10:59,404 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegion(2948): Finished flush of dataSize ~76.23 KB/78055, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-24 23:10:59,406 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 23:10:59,406 DEBUG [M:0;jenkins-hbase4:42745] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 23:10:59,409 INFO [M:0;jenkins-hbase4:42745] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 23:10:59,409 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 23:10:59,410 INFO [M:0;jenkins-hbase4:42745] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42745 2023-07-24 23:10:59,411 DEBUG [M:0;jenkins-hbase4:42745] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42745,1690240254672 already deleted, retry=false 2023-07-24 23:10:59,441 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,441 INFO [RS:3;jenkins-hbase4:42651] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42651,1690240256459; zookeeper connection closed. 2023-07-24 23:10:59,441 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:42651-0x101999a05ba000b, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,441 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4459147f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4459147f 2023-07-24 23:10:59,541 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,541 INFO [RS:0;jenkins-hbase4:34441] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34441,1690240254844; zookeeper connection closed. 2023-07-24 23:10:59,541 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:34441-0x101999a05ba0001, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,552 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6019fb47] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6019fb47 2023-07-24 23:10:59,641 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,641 INFO [M:0;jenkins-hbase4:42745] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42745,1690240254672; zookeeper connection closed. 2023-07-24 23:10:59,641 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): master:42745-0x101999a05ba0000, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,741 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 23:10:59,741 INFO [RS:1;jenkins-hbase4:39455] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39455,1690240254990; zookeeper connection closed. 
2023-07-24 23:10:59,741 DEBUG [Listener at localhost/33659-EventThread] zookeeper.ZKWatcher(600): regionserver:39455-0x101999a05ba0002, quorum=127.0.0.1:61494, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 23:10:59,742 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@903f2f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@903f2f3
2023-07-24 23:10:59,742 INFO [Listener at localhost/33659] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-24 23:10:59,742 WARN [Listener at localhost/33659] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 23:10:59,746 INFO [Listener at localhost/33659] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 23:10:59,848 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 23:10:59,848 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106711482-172.31.14.131-1690240253959 (Datanode Uuid a04b25c7-1e04-465b-b09e-ec2725065e25) service to localhost/127.0.0.1:34031
2023-07-24 23:10:59,849 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data5/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:10:59,849 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data6/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:10:59,850 WARN [Listener at localhost/33659] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 23:10:59,856 INFO [Listener at localhost/33659] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 23:10:59,960 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 23:10:59,960 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106711482-172.31.14.131-1690240253959 (Datanode Uuid 9326d3f2-1f0c-460c-a0e6-afc18ce4b612) service to localhost/127.0.0.1:34031
2023-07-24 23:10:59,961 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data3/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:10:59,961 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data4/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:10:59,962 WARN [Listener at localhost/33659] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 23:10:59,965 INFO [Listener at localhost/33659] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 23:11:00,068 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 23:11:00,068 WARN [BP-106711482-172.31.14.131-1690240253959 heartbeating to localhost/127.0.0.1:34031] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-106711482-172.31.14.131-1690240253959 (Datanode Uuid a3d511de-122d-4ff4-8fc5-040fe3fb445e) service to localhost/127.0.0.1:34031
2023-07-24 23:11:00,068 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data1/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:11:00,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/80ad4928-d02f-956d-bc2a-07816c507d88/cluster_f5253f4f-42b6-b460-06c2-48044aa3ed75/dfs/data/data2/current/BP-106711482-172.31.14.131-1690240253959] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 23:11:00,079 INFO [Listener at localhost/33659] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 23:11:00,193 INFO [Listener at localhost/33659] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-24 23:11:00,219 INFO [Listener at localhost/33659] hbase.HBaseTestingUtility(1293): Minicluster is down